Integrity Constraint Violation

TL;DR: I have a bunch of contacts and segments and a handful of campaigns running on a nice server. Integrity Constraint Violations are exceptions that kill the segment update cron, and it’s breaking the bone that holds all my curse words in.

Looking for guidance on managing Integrity Constraint Violations.
My environment:
Mautic 3.1.1
PHP 7.3
MariaDB 10.3.2
AMD 16-core dedicated server with 64 GB RAM
CentOS 7

I currently have 9,692,499 contacts.
I’m running 20 simultaneous imports of 5,000-line files, with about 10,000 uploads to go.

I have 148 segments, ranging from simple industry sector segments to segments that catch incorrectly formatted lead imports, which I delete via campaign.

A handful of maintenance campaigns: deleting anything that isn’t shinola, or moving contacts that match a combination of lead fields.

The problem I’m having is Integrity Constraint Violations when segments are updating via cron while campaigns are updating and triggering at the same time. I haven’t nailed the timing of all this. If a lead belongs to multiple segments and is subsequently deleted by a quality control campaign, the segment update appears to throw an Integrity Constraint Violation, which kills the entire segment update run.

My questions are: first, can the exception be logged and ignored so the job can continue, and if so, how would one go about that? Second, if ignoring these specific violations cannot be accomplished without significant heartache, what would be some recommended timings for

mautic:segment:update
mautic:campaign:update
mautic:campaign:trigger

So I can process my contacts more efficiently.

I’ve fine-tuned my.cnf to squeeze as much stable performance as possible out of MariaDB, with daily mysqltuner runs to triple-check that performance and stability. I log slow queries, and have debug = true in my local.php so I can tail -f my Mautic logs on screen and watch the workflows.
PHP max parameters are generously set.

I don’t think this is necessarily a performance issue, just bad timing that consistently throws this error and creates even larger workloads the next time a segment update runs.

All replies welcome

@EJL

Seems you have bigger headaches than I have.

The integrity violation seems to point to a foreign key that does not exist in the database.
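If you want to see which child tables hold foreign keys back to the contacts table, something like this against information_schema should list them (the database name and credentials here are placeholders for your own):

# List foreign keys that reference Mautic's leads table
mysql -u mautic -p mauticdb -e "SELECT table_name, constraint_name FROM information_schema.key_column_usage WHERE referenced_table_name = 'leads';"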

Try:
mautic:segment:update --force

Timing sounds like it could be an issue when you get to importing so many records. I would run each command from the terminal and time how long it takes, to get the cron timing more aligned.
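A rough sketch of that, assuming a Mautic 3 install at /var/www/mautic (the path, the command names as you wrote them, and the 15-minute offsets are all placeholders to adjust from your measured run times):

time php /var/www/mautic/bin/console mautic:segment:update
time php /var/www/mautic/bin/console mautic:campaign:update
time php /var/www/mautic/bin/console mautic:campaign:trigger

Once you know the durations, stagger the crontab so the jobs never start in the same minute, e.g.:

*/15 * * * * php /var/www/mautic/bin/console mautic:segment:update
5-59/15 * * * * php /var/www/mautic/bin/console mautic:campaign:update
10-59/15 * * * * php /var/www/mautic/bin/console mautic:campaign:trigger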

I used the -f option and played with batch limits during testing. I decided this is a moving target no matter what. When mautic:segment:update runs sequentially through my segments, a failure midway left the remaining segments in their previous state.

My solution thus far has been mautic:segment:update --list-id (important segment) -f -vvv. I prioritize my quality control segments and run them more frequently and separately from the rest.
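Roughly, that boils down to a small wrapper like the one below, with placeholder segment IDs 3, 7 and 12 standing in for my actual quality control segments:

# priority-segments.sh -- rebuild the quality control segments first, one at a time
for ID in 3 7 12; do
    php /var/www/mautic/bin/console mautic:segment:update --list-id=$ID -f -vvv
done

I cron that more frequently than the full mautic:segment:update pass.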

That seems like a reasonable solution to break the task up into smaller chunks. From my reading, cron commands can run for many minutes when they work through large datasets. I’m still working with smaller amounts of data, but maybe not for long, because I’m launching something that would accumulate many leads quickly.

Batch Size implications under the hood:

Doctrine supports batched insert transactions, so depending on how Mautic codes it, this should be relatively efficient in terms of disk writes.

I think Mautic’s batch size setting actually lets you tune this to your environment for the Doctrine backend:

Bulk inserts in Doctrine are best performed in batches, taking advantage of the transactional write-behind behavior of an EntityManager. The following code shows an example for inserting 10000 objects with a batch size of 20. You may need to experiment with the batch size to find the size that works best for you. Larger batch sizes mean more prepared statement reuse internally but also mean more work during flush.

In Mautic’s context, that seems to suggest larger batches put more load at the tail end of the console command, so smaller, more frequent batches may be your answer here as well.
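In practice that just means passing a smaller batch limit on the heavy cron commands. This is a sketch assuming your Mautic version exposes the --batch-limit option on these commands (the value 100 is only an illustration, not a recommendation):

# Smaller, more frequent batches for the heavy commands
php /var/www/mautic/bin/console mautic:segment:update --batch-limit=100 -f
php /var/www/mautic/bin/console mautic:campaign:update --batch-limit=100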