Proof of Concept: Restoring PostgreSQL Support for Mautic

Hello Mautic Community,

Like many of you, I’ve encountered limitations with MySQL/MariaDB databases in Mautic, such as column size restrictions and index limits per table, which have hindered my ability to scale effectively. To address this, I developed a small proof of concept (PoC) to run Mautic 4 (version 4.4.13, which I’m currently using) on PostgreSQL by converting a MySQL database. I’d like to share my approach, discuss the feasibility of restoring PostgreSQL support, and gather your feedback.

My Approach

To minimize changes to Mautic’s core codebase, I took the following steps:

  1. Analyzed MySQL Dependencies: I reviewed Mautic’s core and plugin code to identify MySQL-specific features and functions. Only a few MySQL functions required porting to PostgreSQL equivalents.

  2. Handled Boolean-Integer Comparisons: A significant challenge was MySQL queries comparing boolean values with integers, which are common in Mautic. Rewriting all queries would be time-intensive, so I implemented custom PostgreSQL operator functions to support these comparisons seamlessly.

  3. Iterative Testing and Fixes: Using a “try each Mautic feature and fix errors” approach, I identified and resolved issues. Surprisingly, only a few code changes were needed to get Mautic running smoothly on a ported PostgreSQL database.

  4. Scope Limitation: For this PoC, I focused on runtime compatibility and did not convert migration or installation files.
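To illustrate steps 1 and 2, a compatibility layer along these lines can be created entirely on the PostgreSQL side (this is a sketch of the general technique, not the exact set of functions and operators my PoC ports; `unix_timestamp` is just one example of a MySQL built-in that PostgreSQL lacks):

```sql
-- Sketch: port a MySQL built-in missing from PostgreSQL,
-- e.g. UNIX_TIMESTAMP(), as a plain SQL function.
CREATE OR REPLACE FUNCTION unix_timestamp(ts timestamptz DEFAULT now())
RETURNS bigint
LANGUAGE sql STABLE AS
$$ SELECT extract(epoch FROM ts)::bigint $$;

-- Sketch: let MySQL-style "boolean = integer" comparisons
-- (e.g. WHERE is_published = 1) work without rewriting queries,
-- by defining an operator for that type pair.
CREATE OR REPLACE FUNCTION bool_int_eq(b boolean, i integer)
RETURNS boolean
LANGUAGE sql IMMUTABLE AS
$$ SELECT b::int = i $$;

CREATE OPERATOR = (
    LEFTARG  = boolean,
    RIGHTARG = integer,
    FUNCTION = bool_int_eq
);
```

With the operator in place, a condition like `true = 1` parses and behaves as it does in MySQL; a matching `<>` operator, and the reverse `(integer, boolean)` argument order, would be defined the same way.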

Current Status

My PoC demonstrates that running Mautic 4.4.13 on PostgreSQL is achievable with minimal code changes. I’m waiting for the official release of Mautic 7.0 to test and port this solution to the latest version.

Call for Discussion

I believe restoring PostgreSQL support could benefit the Mautic community, especially for users needing a more robust database to handle larger datasets or specific use cases. This PoC suggests it’s feasible with limited resources, but I’d love to hear your thoughts:

  • Are you interested in PostgreSQL support for Mautic? Why?

  • What challenges or concerns do you foresee in maintaining PostgreSQL compatibility?

  • Would you be willing to collaborate or test a PostgreSQL-compatible version?

Please note: This is a proof of concept, not official or unofficial support. I’m sharing this to gauge interest and spark discussion about reintroducing PostgreSQL as a supported database. Looking forward to your ideas and feedback!

Best regards,
Wieslaw Golec


Hi @esio and thanks so much for taking the time to work on this POC, great job! This was something we were thinking of bringing back in 2019 (see Multi-database support) but nobody has got around to working on it yet.

One of the primary reasons we dropped multi-database support way back when was because so few people were using it and nobody was testing with it. This meant that critical bugs could come in without the developers knowing. We had very little test coverage at that time, which made things worse.

It’d be great, therefore, to have a working group or tiger team that would be responsible for multi-database support if you wanted to introduce it back into core - they could become the ‘subject matter experts’ in that area, and ensure that new code being added considers the different types of database.

We now have substantially wider coverage with automated tests, which makes things a bit better when it comes to picking up problems. You’ll also need to consider that the test suite (available when you clone Mautic from GitHub, more here) will need to be reviewed and updated to ensure that it takes multi-database support into account, along with updating our GitHub Actions accordingly so that tests run against PostgreSQL as well as MySQL and MariaDB. We’ll need to document this for developers as well, because I guess it’ll also impact plugins and integration work.

I’ll let the core team know of your POC as well, so they can provide input. Thanks for raising the suggestion!

Hello esio,

Thank you for your great work on restoring PostgreSQL support for Mautic. I have little expertise in PostgreSQL myself, but I will keep an eager eye on how this progresses.

I asked an AI to compare PostgreSQL and MariaDB, and there were no major differences in specifications that I could find, so I am intrigued.

However, would you be kind enough to share your experiences running Mautic at scale with MariaDB, given the sizing limitations you have hit? There is little information I can find on running MariaDB at scale, so any information you can share with the community here would be gratefully received.

Would you possibly consider writing a separate forum post, case study, or blog post on the subject for the benefit of everyone here?

Thanks.

Hi rcheesley,

Thanks for the feedback and context on the historical decision to drop PostgreSQL support. I understand the challenges with low adoption and testing coverage back then, which makes sense given the maintenance burden.

To clarify, my PoC for running Mautic 4.4.13 on PostgreSQL was designed to minimize core code changes while achieving compatibility. Here’s how I approached it:

  • MySQL Function Compatibility: I implemented missing MySQL functions in PostgreSQL to mirror their behavior exactly, ensuring no changes were needed in Mautic’s core queries or plugins.

  • Boolean-Integer Comparisons: I addressed the prevalent boolean-integer comparison issue in queries by adding custom PostgreSQL operators, avoiding the need to rewrite existing queries.

  • Minimal Core Changes: The PoC required only a handful of targeted adjustments to get Mautic running smoothly on a ported PostgreSQL database. Migration and installation scripts were not addressed in this phase.

The result is a lightweight solution that leverages PostgreSQL’s strengths (e.g., better handling of complex queries and large datasets) without significant codebase modifications. While I recognize the need for automated tests and documentation for full core integration, my focus was on proving feasibility with minimal overhead.

I appreciate you looping in the core team for input. I’d be interested in their thoughts on the viability of reintroducing PostgreSQL support, especially given the improved test coverage you mentioned. For now, I’m planning to port this PoC to Mautic 7.0 once it’s released. If there’s interest, I can share more technical details or discuss potential next steps for evaluating PostgreSQL as a supported database.

What are the core team’s main concerns or priorities for considering multi-database support? Are there specific blockers (e.g., plugin compatibility, performance, testing) you’d want addressed in a follow-up PoC?

Looking forward to the discussion.

Best,
Wieslaw Golec

Hi andrew_c3,

Thanks for your interest in the PostgreSQL proof of concept. I’m glad to hear someone is keeping an eye on its progress. Regarding your question about MySQL/MariaDB vs. PostgreSQL, while both databases have similar capabilities on paper, PostgreSQL often handles complex queries, larger datasets, and advanced indexing better, which motivated my PoC. That said, the differences become more apparent at scale or with specific workloads, which I’ll touch on below.

My Experience Running Mautic at Scale with MySQL

We’re running Mautic 4.4.13 on MySQL 8.0 (configured for 5.7 compatibility due to ONLY_FULL_GROUP_BY issues). Here’s a breakdown of our setup and challenges:

  • Database Size: At its peak, we had 4 million contacts (now reduced to 1.23 million to manage performance). The email_stats table reached 700GB, now around 350GB after optimization.

  • Custom Fields: We used nearly 150 fields (now 140), which pushed MySQL’s column and index limits, causing slowdowns.

  • Segments and Campaigns: Complex segments often referenced multiple other segments via membership conditions. We ran multiple campaigns simultaneously, including recurring ones, sending high email volumes daily.

  • Additional Features: Heavy use of user tracking and API for data sync, email templates, and dynamic content updates.

  • Workarounds: To manage the load, we developed custom plugins:

    • One plugin offloads email_stats data to a separate table while maintaining access to general stats (sent, bounces, complaints, etc.) in reports and dashboard widgets.

    • Another plugin processes SES (Amazon Simple Email Service) notifications for bounces, complaints, and suppressions, plus checks disposable domains/mailboxes. We validate contacts pre-import to filter out suppressed, disposable, or problematic emails.

  • Hardware: This runs on a server with 16GB RAM, 4 CPUs (8 threads), which is modest for the workload.

The main issues were query performance degradation due to table size, index limitations, and column constraints in MySQL. These pushed us to explore PostgreSQL, which seems promising for handling larger datasets and complex queries with fewer workarounds.
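For anyone curious what the “5.7 compatibility” workaround mentioned above typically looks like: ONLY_FULL_GROUP_BY is part of MySQL 8.0’s default sql_mode, and the common fix is to strip that one flag rather than change version behaviour (a sketch; your exact sql_mode string may differ, and a permanent change belongs in my.cnf):

```sql
-- Remove ONLY_FULL_GROUP_BY from the current sql_mode.
-- Session scope shown; use SET GLOBAL (or my.cnf) to persist it.
SET SESSION sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));
```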

On Sharing a Case Study

I appreciate the suggestion to write a detailed case study or blog post. I’m open to it, but it would depend on community interest and my availability. For now, I’ve shared the key details above to spark discussion. If there’s enough demand, I could consider a dedicated forum post or contribute to the Mautic blog with a deeper dive into our setup, optimizations, and the PostgreSQL PoC.

What specific aspects of running Mautic at scale are you curious about? Are you facing similar bottlenecks with MariaDB, or is there a particular area (e.g., performance tuning, plugins) you’d like more details on? Also, if anyone else has tackled high-scale Mautic setups or explored PostgreSQL, I’d love to hear your experiences.

Looking forward to your thoughts.

Best,

Wieslaw Golec

Hello esio,

Thank you for sharing your experiences running Mautic and MySQL at scale.

My professional working life has all been about working in technology on commercial projects.

I’m fairly new to Mautic and I have a small production system that has recently gone live. I’m evaluating its capabilities and learning to write Mautic / Symfony code. Before I commit to using Mautic long term, I need to be able to identify, fix, and develop code myself and become totally self-sufficient. That is a work in progress.

My preferred development languages at the moment are C, C++ and SQL. I’m now learning Symfony so I can write code for Mautic.

I hope to run at scale, so I’m interested in learning more about how Mautic runs at scale, as there are no published blogs or case studies on the subject. I’ve just found one or two YouTube videos that discuss it, and I think four million contact records was mentioned.

My understanding is that MySQL and MariaDB are now evolving in their own ways. Have you tried MariaDB?

Most of my database work has been with Oracle, but I believe MariaDB is just as capable, in my opinion. Others may have a different view. I know little about PostgreSQL, so I will follow your work with interest.

My first impression of why your Mautic instance may be running slowly comes down to the hardware spec of the server. The specification you shared is about the minimum I would roll out for a simple file server and/or an email server supporting 10 users on-premise. Software-defined storage needs two CPUs and 2GB of RAM per storage device.

I think your database server is probably under-resourced by a significant amount. I would try doubling the RAM, which is cheap, and see if that improves performance. Otherwise, you need to do a proper server performance analysis, or get a consultant who can help identify the bottlenecks.

I also think that PHP and Symfony are not the most efficient in terms of database performance, and I questioned this in forum posts some months ago.

In the Mautic 7 Alpha release there is a PR about improving the performance of segment membership recalculations. I think there was a tenfold speed improvement, which may help you. You may be able to use that improvement and retrofit the code into Mautic 4.x. I think you said you write plugins.

Once I have the M7 Alpha installed on my platform, I will be experimenting with code written in C and/or C++, which you can call from Symfony, to try to improve bottlenecks I have found. The PHP and Symfony purists will dislike that approach, but C and C++ are far more efficient, and I will happily share some comparative results. Again, that is a work in progress.

From what I have read, I’m not convinced that swapping the database engine will give you a significant speed improvement on the same hardware platform, but I could be wrong.

I wish you well with your PostgreSQL PoC.

Thanks.

Hi andrew_c3,

Thanks for sharing your background and insights. It’s good to hear someone is diving into Mautic and Symfony development to become self-sufficient. I’ll address your points about running Mautic at scale and clarify why PostgreSQL is a compelling option for my use case.

MySQL/MariaDB Experience and Bottlenecks

To answer your question, we’re using MySQL 8.0 (in 5.7 compatibility mode due to ONLY_FULL_GROUP_BY issues), not MariaDB. I haven’t tested MariaDB specifically, but despite its divergence from MySQL, I’d expect similar limitations for our workload. Our setup includes 1.23 million contacts (down from 4 million), a 350GB email_stats table, and 140 custom fields. We run complex segments, multiple campaigns, user tracking, and API integrations on a server with 16GB RAM and 4 CPUs (8 threads).

The primary bottlenecks aren’t code execution (e.g., PHP/Symfony inefficiencies) but database limitations:

  • Custom Field Limits: MySQL’s row size and column restrictions cap our ability to add more custom fields (we’re near the 150-field ceiling).

  • Index Limits: Complex segment criteria require custom indexes, but MySQL’s 64-index-per-table limit prevents us from optimizing queries effectively.

  • Query Performance: Large tables and complex queries (e.g., segment membership recalculations) strain performance, even with custom plugins offloading email_stats data.

While adding RAM or CPU could help query speed, it doesn’t address the structural limitations preventing growth. Scaling hardware is a temporary fix, not a solution for expanding custom fields or indexes.

Why PostgreSQL?

My PoC was initially driven by scalability, not just performance. PostgreSQL avoids MySQL’s hard limits on columns and indexes, allowing us to grow our dataset (more custom fields, complex segments) without hitting ceilings. In my tests, PostgreSQL handled our workload with minimal code changes—mostly adding MySQL-compatible functions and custom operators for boolean-integer comparisons. While I haven’t benchmarked speed extensively, the ability to add indexes freely and support more fields is a game-changer for our use case.

I’m skeptical that retrofitting Mautic 7’s segment recalculation improvements into 4.4.13 would solve these structural issues, as the database constraints remain. That said, I’ll look into the Mautic 7 Alpha PR you mentioned—thanks for the pointer.

On C/C++ Integration

I appreciate your approach to experimenting with C/C++ for performance. However, since our bottlenecks are database-driven (not PHP/Symfony execution), I don’t see C/C++ integration addressing our core issues. Modifying core code or writing extensive plugins also risks complicating upgrades to future Mautic versions, which we’re avoiding to keep maintenance manageable.

Next Steps

I’m not convinced swapping databases will yield massive speed gains on the same hardware either, but speed isn’t the primary goal—scalability is. PostgreSQL’s flexibility allows us to grow without the hacks required in MySQL. I’ll continue refining the PoC for Mautic 7.0 and share updates as it progresses.

Best,

Wieslaw Golec

Hello esio,

Now I understand your problem and your thinking.

It appears that Mautic implements custom fields by adding them to the leads table. Therefore, if you have many custom fields, the table becomes very wide and MariaDB hits a hard limit, whereas PostgreSQL may allow much wider tables so you can create many more custom fields. PostgreSQL may also allow more indexes per table to cater for the wider table. Yes, PostgreSQL may be a solution for you.

However, I don’t agree with very wide tables and hundreds of indexes on the same table from a design or implementation perspective. You will eventually hit PostgreSQL’s limits, and then what will you do? And I don’t believe it will scale very well either. But I will follow your work with interest.

I have a different approach in mind. I wish to add a lot of extra functionality into Mautic, and yes, I will use some custom fields. I plan to use Mautic as released and then add my custom logic by writing custom modules, possibly in the form of plugins. The custom modules will use additional database tables in the same and/or separate databases. As I will control that database schema, I can design it to scale and not hit a database hard limit.

PHP is a good programming language and I will add plenty into custom modules. However, I need to code quickly and break the work down into threads that run on hundreds of CPU cores and terabytes of memory, on one or many servers. I can’t achieve that with PHP very quickly, but I can with C and C++.

I’ve tried making recommendations on the forum on ways to improve Mautic, without much success. I can implement the code faster than I can write forum posts. That way Mautic can follow its design path and my custom developments can follow a different path, unhindered by the former. Everyone is a winner.

Thanks.

Hello esio,

Just a thought.

You have said that the maximum number of custom fields on a MariaDB database is approximately 150, according to your real-world tests.

If you have Mautic connected to PostgreSQL, would you, for the benefit of the community, do a simple test to find out how many custom fields and indexes can be created before hitting a hard limit?

That comparison would be a wonderful thing to see, and maybe the Mautic developers could publish the results for the benefit of the community.

Thanks.

Hi andrew_c3,

Thanks for your reply and for sharing your approach. I appreciate your perspective on database design and your plans for scaling Mautic. Let me address your points and respond to your request for testing PostgreSQL limits.

Clarifying MySQL Limitations and PostgreSQL’s Role

You’re correct that Mautic’s implementation of custom fields in the leads table causes issues in MySQL. In our setup, we hit a practical limit of around 140–150 custom fields due to row size and column constraints. The 64-index-per-table limit further complicates things, as our complex segments require custom indexes for performance, which MySQL can’t accommodate without hitting this ceiling. These restrictions prevent us from scaling our dataset or adding functionality, which is why we explored PostgreSQL.

PostgreSQL avoids these hard limits. It supports significantly wider tables (up to 1,600 columns, though we’re nowhere near that) and has a much higher index limit per table (effectively unlimited for practical purposes, constrained only by storage). My PoC confirms PostgreSQL can handle our current workload (1.23M contacts, 350GB email_stats table, 140 fields) with minimal code changes, and it allows room for growth without MySQL’s structural barriers.

On Database Design and Wide Tables

I understand your concerns about wide tables and excessive indexes not being ideal from a design perspective. Wide tables can indeed become unwieldy, and I agree that scaling them indefinitely isn’t a long-term solution. However, our goal isn’t to push PostgreSQL to its absolute limits but to overcome MySQL’s immediate constraints, which block our ability to add more fields or indexes. PostgreSQL provides breathing room to grow our dataset and optimize complex queries without resorting to heavy code modifications or workarounds that complicate future upgrades.

As for hitting PostgreSQL’s limits, that’s a valid point, but its thresholds (1,600 columns, high index capacity) are far beyond our current needs, making it a practical solution for now. If we approach those limits, we’d reassess, possibly exploring schema redesign or other databases, but that’s a future problem.

Your Custom Module Approach

Your plan to use custom modules and additional tables (potentially in separate databases) to avoid Mautic’s default schema limitations is interesting. It’s a different approach from ours, as we’re focused on minimizing core code changes to maintain compatibility with future Mautic releases. Your method could work for specific use cases, but it risks diverging from Mautic’s core design, which might complicate maintenance or integration with upstream updates. That said, I’d be curious to hear how your custom modules perform at scale—especially if you’re targeting hundreds of CPU cores and terabytes of RAM.

Regarding C/C++ for threading and performance, I understand the appeal for compute-heavy tasks. However, our bottlenecks are database-driven (query complexity, index limits), not PHP execution. Optimizing PHP or adding C/C++ wouldn’t address our core issue of MySQL’s structural limits, which is why we prioritized a database swap over code-level changes.

Testing PostgreSQL Limits

I appreciate your suggestion to test PostgreSQL’s maximum custom fields and indexes for community benefit. While I haven’t run exhaustive tests to find the exact limits, I can confirm PostgreSQL’s theoretical caps (1,600 columns, practically unlimited indexes) far exceed MySQL’s (150 fields, 64 indexes in our tests). Running a synthetic test to hit PostgreSQL’s ceiling would require significant effort beyond the scope of my PoC, which focused on proving runtime compatibility for our workload. That said, if there’s strong community interest, I could consider a limited test (e.g., adding 200–300 fields and 100 indexes) once I port the PoC to Mautic 7.0. I’ll share any results in this thread or a follow-up post.
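If such a test goes ahead, a minimal column-limit probe could look something like this (a hypothetical sketch; the `limit_probe` table and column names are made up, and note that PostgreSQL’s 1,600-column cap also counts previously dropped columns, so the practical ceiling can be lower):

```sql
-- Hypothetical probe: keep adding integer columns until PostgreSQL refuses,
-- then report how many were accepted.
CREATE TABLE limit_probe ();

DO $$
DECLARE
    i int := 0;
BEGIN
    LOOP
        i := i + 1;
        EXECUTE format('ALTER TABLE limit_probe ADD COLUMN c%s integer', i);
    END LOOP;
EXCEPTION WHEN others THEN
    RAISE NOTICE 'stopped after % columns: %', i - 1, SQLERRM;
END $$;
```

A similar loop over `CREATE INDEX` statements would probe the index side, where the constraint is expected to be storage rather than a fixed per-table cap.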

What scale are you targeting with your Mautic instance? Are you planning to use many custom fields, or will your custom tables avoid that issue entirely? Also, since you’ve worked with Oracle, how do you see its scalability comparing to MariaDB or PostgreSQL for Mautic-like workloads?

Thanks for the discussion—looking forward to your thoughts.

Best,

Wieslaw Golec

Hello esio,

I think the Mautic concept is great and it has a huge potential.

I’m running a small production instance so I can understand its operation. I have had time to look at the database schema, and I don’t think it scales well. That’s why I’m trying to learn about the problems from those attempting to run at scale, so I can avoid the pitfalls if I know about them in advance. I read about all the code fixes, modifications, PRs, etc., which is just good engineering practice.

I know Mautic is developed by a community of volunteers. However, no commercial development team would rely on a design that uses a very wide table that will easily hit a database hard limit. There are easier ways to implement custom fields that would scale. And if it’s been like that since M4, why has it not been fixed in M7A? There is not even a discussion about it on the forum. Frankly, I’m shocked.

I’m not planning to use many custom fields now that I know about the hard limits. I personally like MariaDB and would prefer not to switch to PostgreSQL. Oracle is a market leader but expensive. This is a Mautic database schema design issue that needs to be resolved by the developers for Mautic to scale well.

I just wanted to use Mautic and wasn’t planning on customisations. However, I now plan to go down the custom route where necessary, as I don’t believe the developers will fix this or a number of other problems anytime soon, I’m sorry to say, which is sad for the project and for the whole community.

I wish you well with the PostgreSQL PoC and hope the developers fix these issues soon for the benefit of everyone.

Thanks.