Moving Mautic 5 SQL DB: bash script to automate export/import to new server

I’m sharing this bash script that I had to create to export/import the Mautic 5 MySQL database to a new server. Feel free to use it to save yourself some time. It will export all tables from a local source database, fix the issues that cause the import to fail, and import everything into a destination database (which can be on a different server, assuming you have opened remote MySQL access for your source server’s IP).
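
About “opening the remote connection”: the destination MySQL server must accept connections from the source server’s IP before the import can run. Here is a minimal sketch of what that usually involves, run on the destination server (the names below are placeholders matching the configuration variables in the script, not real values):

mysql -u root -p -e "CREATE USER 'DESTINATION_DB_USER'@'SOURCE_SERVER_IP' IDENTIFIED BY 'DESTINATION_DB_PASSWORD'; GRANT ALL PRIVILEGES ON DESTINATION_DB_NAME.* TO 'DESTINATION_DB_USER'@'SOURCE_SERVER_IP'; FLUSH PRIVILEGES;"

You will also need MySQL’s bind-address set to an interface reachable from the source server, and port 3306 open in the firewall for that IP.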

What I thought would be a 30-minute task took over 8 hours, thanks to foreign-key constraints and import errors caused by generated-column values in the SQL dump that blocked the import.

If this is useful to you, leave a reply so I know it helped.
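
For context on the generated-column problem: at least with the dumps involved here, mysqldump writes out concrete values for generated columns, and MySQL refuses explicit values for a column it computes itself. A minimal reproduction with a hypothetical table (not a Mautic table; --force makes the mysql client continue past the expected error):

mysql -u root -p --force <<'SQL'
CREATE TABLE demo (
  email VARCHAR(255),
  email_domain VARCHAR(255) GENERATED ALWAYS AS (substr(email, locate('@', email) + 1)) VIRTUAL
);
-- This is what a dump line effectively replays; it fails with
-- ERROR 3105: The value specified for generated column 'email_domain' in table 'demo' is not allowed.
INSERT INTO demo VALUES ('user@example.com', 'example.com');
-- Writing DEFAULT in the generated column's position is accepted, which is
-- exactly what the sed fixes in the script below do:
INSERT INTO demo VALUES ('user@example.com', DEFAULT);
SQL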

#!/bin/bash
####################################################################################################
# MAUTIC 5 DATABASE EXPORT/IMPORT SCRIPT
#
# This script will export a Mautic 5 database to a new MySQL database (on the same or different server).
# It was created after 8 hours of struggle with the Mautic 5 database export/import process, as a result
# of foreign-key constraint errors and other issues that arose during the process.
#
# HOW IT WORKS:
#
# 1. It exports all database tables, one table at a time.
# 2. It fixes the mt_email_stats and mt_leads tables by removing generated-column values from INSERT
#    statements, which would otherwise cause errors that make the import fail.
# 3. It batches the INSERT statements into groups of 250 rows for mt_email_stats and mt_leads tables.
# 4. It imports the tables into the new database in a specific order to avoid foreign-key constraint errors.
# 5. It verifies the schema and sample data for mt_email_stats and mt_leads tables after import.
# 6. It cleans up temporary files and directories.
#
# HOW TO USE:
#
# 1. Set the Source and Destination database configuration variables.
# 2. On your source server, go into a non-public directory where the backups will be created.
# 3. Create a file named "movedb.sh", paste this script into it, then "sudo chmod +x movedb.sh" to make it executable.
# 4. Run the script with "sudo ./movedb.sh" to start the export/import process.
#
# SUGGESTED USAGE:
#
# 1. Create a test database on the destination server (e.g., "mautic5_test").
# 2. Run the script to your test database and verify the import worked without errors.
# 3. Verify all tables were created, and that table sizes match the source database.
# 4. If everything looks good:
# 4.a. Create a new database on the destination server (e.g., "mautic5_production").
# 4.b. Set up your "movedb.sh" script with configuration to be ready to run.
# 4.c. Create a duplicate of your Mautic "config/local.php" file with the new db settings, name
#    it something like "local.php.ready" so it is ready for the switch.
# 4.d. Take the website offline/into maintenance mode for ~5 minutes.
# 4.e. Pause all Campaigns in Mautic.
# 4.f. Run "sudo ./movedb.sh" to begin transfer of the production db.
# 4.g. When transfer is complete, switch "local.php.ready" to "local.php".
# 4.h. Bring website out of maintenance mode.
# 4.i. Resume all Campaigns in Mautic.
# 5. Delete the test database, "movedb.sh" script, and any remaining backup files.
#
####################################################################################################

# Source database configuration
SRC_HOST="YOUR_MYSQL_SERVER_IP"
SRC_PORT=3306
SRC_USER="YOUR_DB_USER"
SRC_PASS="YOUR_DB_PASSWORD"
SRC_DB="YOUR_DB_NAME"

# Destination database configuration
DST_HOST="DESTINATION_MYSQL_SERVER_IP"
DST_PORT=3306
DST_USER="DESTINATION_DB_USER"
DST_PASS="DESTINATION_DB_PASSWORD"
DST_DB="DESTINATION_DB_NAME"

# Helper folders used on the source Linux server for the dump and fixed table files
DUMP_DIR="dump_tables"
FIXED_DIR="fixed_tables"
TEMP_DIR="$FIXED_DIR/temp"

# Step 1: Export all tables from source database
echo "Exporting tables from $SRC_DB on $SRC_HOST..."

# Create directories
mkdir -p "$DUMP_DIR" "$FIXED_DIR" "$TEMP_DIR"

# Export mt_email_stats and mt_leads with one INSERT per line
for table in "mt_email_stats" "mt_leads"; do
    echo "Exporting $table with one INSERT per line..."
    mysqldump -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" --opt --single-transaction --no-create-db --skip-extended-insert "$SRC_DB" "$table" > "$DUMP_DIR/$table.sql"
    if [ $? -ne 0 ] || [ ! -s "$DUMP_DIR/$table.sql" ]; then
        echo "Error: Failed to export $table."
        exit 1
    fi
done

# Export all other tables with regular extended INSERT
echo "Exporting all other tables with extended INSERT..."
mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -N -e "SHOW TABLES FROM $SRC_DB WHERE Tables_in_$SRC_DB NOT IN ('mt_email_stats', 'mt_leads')" | while read -r table; do
    echo "Exporting table $table..."
    mysqldump -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" --opt --single-transaction --no-create-db "$SRC_DB" "$table" > "$DUMP_DIR/$table.sql"
    if [ $? -ne 0 ] || [ ! -s "$DUMP_DIR/$table.sql" ]; then
        echo "Error: Failed to export table $table."
        exit 1
    fi
done

# Verify dumps
if [ -z "$(ls -A "$DUMP_DIR")" ]; then
    echo "Error: No tables exported into $DUMP_DIR."
    exit 1
fi

# Step 2: Fix and batch mt_email_stats and mt_leads
for table in "mt_email_stats" "mt_leads"; do
    TEMP_FILE="$TEMP_DIR/${table}_temp.sql"
    BATCHED_FILE="$TEMP_DIR/${table}_batched.sql"
    
    if [ "$table" = "mt_email_stats" ]; then
        echo "Fixing generated_sent_date in $DUMP_DIR/$table.sql (removing INSERT values)..."
        # Note: literal parentheses in the match pattern are escaped (with sed -E,
        # bare parens are grouping operators), and a bare NULL at the end of a row
        # is also rewritten to DEFAULT.
        sed -E \
            -e '/`generated_sent_date`/ s/`generated_sent_date`.*GENERATED ALWAYS AS \(concat\(year\(`date_sent`\),_utf8mb4.*\)\) VIRTUAL/`generated_sent_date` date GENERATED ALWAYS AS (concat(year(`date_sent`),'"'"'-'"'"',lpad(month(`date_sent`),2,'"'"'0'"'"'),'"'"'-'"'"',lpad(dayofmonth(`date_sent`),2,'"'"'0'"'"'))) VIRTUAL/' \
            -e '/INSERT INTO `mt_email_stats`/ s/('"'"'[0-9]{4}-[0-9]{2}-[0-9]{2}'"'"'|NULL)\);$/DEFAULT);/' "$DUMP_DIR/$table.sql" > "$TEMP_FILE"
    elif [ "$table" = "mt_leads" ]; then
        echo "Fixing generated_email_domain in $DUMP_DIR/$table.sql (removing INSERT values)..."
        # Limit the field rewrite to INSERT lines so the CREATE TABLE statement is untouched
        sed -E \
            -e "/INSERT INTO \`mt_leads\`/ s/(([^,]*,){45})(NULL|'[^']*')/\1DEFAULT/" "$DUMP_DIR/mt_leads.sql" > "$TEMP_FILE"
    fi
    
    if [ $? -ne 0 ] || [ ! -s "$TEMP_FILE" ]; then
        echo "Error: Failed to fix $table.sql."
        exit 1
    fi

    # Verify initial fix
    if grep -q "_utf8mb4" "$TEMP_FILE"; then
        echo "Error: '_utf8mb4' still present in $table.sql."
        exit 1
    fi
    if [ "$table" = "mt_email_stats" ] && grep -q "'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}'" "$TEMP_FILE"; then
        echo "Warning: Possible date values remain in $table.sql. Checking INSERT statements..."
        if grep "'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}'" "$TEMP_FILE" | grep -B 1 "INSERT INTO \`$table\`" | grep -v "date_sent" | grep -v "date_read" | grep -v "last_opened" | grep -q "'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}'"; then
            echo "Error: Date values still present in INSERT statements for generated_sent_date."
            exit 1
        fi
    fi
    if [ "$table" = "mt_leads" ] && grep -q "'[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]*'" "$TEMP_FILE"; then
        echo "Warning: Possible domain values remain in $table.sql. Checking INSERT statements..."
        if grep "'[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]*'" "$TEMP_FILE" | grep -B 1 "INSERT INTO \`$table\`" | grep -v "email" | grep -v "website" | grep -v "facebook" | grep -v "linkedin" | grep -q "'[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]*'"; then
            echo "Error: Domain values still present in INSERT statements for generated_email_domain."
            exit 1
        fi
    fi

    # Batch into groups of 250 rows
    echo "Batching $table INSERT statements into groups of 250..."
    awk '
    BEGIN { 
        count = 0; 
        values = ""; 
        header = "INSERT INTO `'"$table"'` VALUES "
    }
    /INSERT INTO `'"$table"'` VALUES/ { 
        sub(/INSERT INTO `'"$table"'` VALUES /, "")
        match($0, /\((.*)\);$/, arr)
        if (count == 0) {
            values = "(" arr[1] ")"
        } else {
            values = values ",(" arr[1] ")"
        }
        count++
        if (count == 250) {
            print header values ";"
            count = 0
            values = ""
        }
        next
    }
    END { 
        if (count > 0) {
            print header values ";"
        }
    }
    !/INSERT INTO `'"$table"'` VALUES/ { print $0 }
    ' "$TEMP_FILE" > "$BATCHED_FILE"

    if [ $? -ne 0 ] || [ ! -s "$BATCHED_FILE" ]; then
        echo "Error: Failed to batch $table.sql into $BATCHED_FILE."
        exit 1
    fi

    # Move batched file to final location
    mv "$BATCHED_FILE" "$FIXED_DIR/$table.sql"
    rm -f "$TEMP_FILE"
    echo "Successfully fixed and batched $table.sql"
done

# Step 3: Move other tables from DUMP_DIR to FIXED_DIR
echo "Moving other tables from $DUMP_DIR to $FIXED_DIR..."
for table_file in "$DUMP_DIR"/*.sql; do
    table_name=$(basename "$table_file" .sql)
    if [ "$table_name" != "mt_email_stats" ] && [ "$table_name" != "mt_leads" ]; then
        cp "$table_file" "$FIXED_DIR/$table_name.sql"
    fi
done

# Step 4: Import tables in dependency order
echo "Importing tables into $DST_DB on $DST_HOST in dependency order..."

# Define import order based on Mautic dependencies
TABLE_ORDER=(
    "mt_ip_addresses"      # No FKs
    "mt_leads"            # Referenced by mt_email_stats (lead_id)
    "mt_emails"           # Referenced by mt_email_stats (email_id)
    "mt_lead_lists"       # Referenced by mt_email_stats (list_id)
    "mt_email_copies"     # Referenced by mt_email_stats (copy_id)
    "mt_email_stats"      # Depends on above tables
)
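
# (A sketch for extending TABLE_ORDER if your install has extra tables with
#  foreign keys: list FK relationships from information_schema on the source
#  server, and put referenced tables before the tables that point at them:
#    mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -N -e \
#      "SELECT table_name, referenced_table_name FROM information_schema.key_column_usage
#       WHERE table_schema='$SRC_DB' AND referenced_table_name IS NOT NULL;")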

# Import tables in specified order
for table in "${TABLE_ORDER[@]}"; do
    if [ -f "$FIXED_DIR/$table.sql" ]; then
        echo "Importing table $table..."
        mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" "$DST_DB" < "$FIXED_DIR/$table.sql"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to import table $table."
            exit 1
        fi
        # -N returns just the number; empty output means verification failed
        # (the old "|| echo 0" fallback made this check impossible to fail)
        ROW_COUNT=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -N -e "SELECT COUNT(*) FROM \`$table\`" "$DST_DB" 2>/dev/null)
        if [ -n "$ROW_COUNT" ]; then
            echo "Imported $table with $ROW_COUNT rows."
        else
            echo "Error: Could not verify table $table after import."
            exit 1
        fi
        # Remove from fixed_tables to avoid re-importing
        rm "$FIXED_DIR/$table.sql"
    fi
done

# Import remaining tables (alphabetical order)
for table_file in "$FIXED_DIR"/*.sql; do
    if [ -f "$table_file" ]; then
        table_name=$(basename "$table_file" .sql)
        echo "Importing table $table_name..."
        mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" "$DST_DB" < "$table_file"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to import table $table_name."
            exit 1
        fi
        # As above: -N returns just the number, and empty output means failure
        ROW_COUNT=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -N -e "SELECT COUNT(*) FROM \`$table_name\`" "$DST_DB" 2>/dev/null)
        if [ -n "$ROW_COUNT" ]; then
            echo "Imported $table_name with $ROW_COUNT rows."
        else
            echo "Error: Could not verify table $table_name after import."
            exit 1
        fi
    fi
done

# Step 5: Verify mt_email_stats and mt_leads schema and sample data
for table in "mt_email_stats" "mt_leads"; do
    echo "Verifying $table schema and sample data..."
    if [ "$table" = "mt_email_stats" ]; then
        SCHEMA=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SHOW CREATE TABLE $table\G" "$DST_DB" 2>/dev/null | grep "generated_sent_date")
        if echo "$SCHEMA" | grep -q "concat(year(\`date_sent\`),'-',lpad(month(\`date_sent\`),2,'0'),'-',lpad(dayofmonth(\`date_sent\`),2,'0'))"; then
            echo "Schema for generated_sent_date looks correct."
        else
            echo "Error: generated_sent_date schema does not match expected expression."
            exit 1
        fi
        SAMPLE=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SELECT date_sent, generated_sent_date FROM $table LIMIT 1" "$DST_DB" 2>/dev/null)
        echo "Sample data for $table: $SAMPLE"
    elif [ "$table" = "mt_leads" ]; then
        SCHEMA=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SHOW CREATE TABLE $table\G" "$DST_DB" 2>/dev/null | grep "generated_email_domain")
        if echo "$SCHEMA" | grep -q "substr(\`email\`,locate('@',\`email\`) + 1)"; then
            echo "Schema for generated_email_domain looks correct."
        else
            echo "Error: generated_email_domain schema does not match expected expression."
            exit 1
        fi
        SAMPLE=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SELECT email, generated_email_domain FROM $table LIMIT 1" "$DST_DB" 2>/dev/null)
        echo "Sample data for $table: $SAMPLE"
    fi
done

# Step 6: Clean up temporary directory
echo "Cleaning up temporary files and directory..."
rm -rf "$TEMP_DIR"

echo "Export, fix, batch, and import of all tables completed successfully."

From a technology perspective, for future versions of Mautic, I would suggest no longer using generated SQL columns. They cause immense trouble during export/import and there is no easy way around it: the generated values have to be manually stripped for the import to work. I would suggest calculating these values in PHP instead.

The reality is that a Mautic database fills up quickly, and moving it to a larger server is a common need, so this process should be made as simple as possible.

If you need even less downtime, you can expand this script with a few steps:

  1. Add configuration variables for PATH_TO_LOCAL_CONFIG and PATH_TO_NEW_LOCAL_CONFIG.
  2. Have the bash script connect to the Source DB and stop active campaigns using UPDATE statements (you’ll want to first back up the current values for each campaign to a temporary file).
  3. Run the export/import code…
  4. After import, have the script UPDATE the Campaigns to active again on your Source and Destination servers (using your temporary file as reference).
  5. Have the bash script: (i) rename “local.php” to “local.php.bak” (PATH_TO_LOCAL_CONFIG), and (ii) rename your PATH_TO_NEW_LOCAL_CONFIG file to “local.php”.

This way, the entire process is automated with the least amount of downtime.

You can feed Grok this bash script and the list in this post and it will update the script for you.

Updated bash script with automation for Campaign pause/resume and renaming of local.php file, to reduce downtime during database move:

#!/bin/bash
####################################################################################################
# MAUTIC 5 DATABASE EXPORT/IMPORT SCRIPT
#
# This script will export a Mautic 5 database to a new MySQL database (on the same or different server).
# It was created after 8 hours of struggle with the Mautic 5 database export/import process, as a result
# of foreign-key constraint errors and other issues that arose during the process.
#
# HOW IT WORKS:
#
# 1. It stops active campaigns in the source database and backs up their statuses to a temporary file
#    (skipped if IS_TEST_MODE is TRUE).
# 2. It exports all database tables, one table at a time.
# 3. It fixes the mt_email_stats and mt_leads tables by removing generated-column values from INSERT
#    statements, which would otherwise cause errors that make the import fail.
# 4. It batches the INSERT statements into groups of 250 rows for mt_email_stats and mt_leads tables.
# 5. It imports the tables into the new database in a specific order to avoid foreign-key constraint errors.
# 6. It restores campaign statuses on both source and destination databases using the backup file
#    (skipped if IS_TEST_MODE is TRUE).
# 7. It verifies the schema and sample data for mt_email_stats and mt_leads tables after import.
# 8. It renames the Mautic configuration file "local.php" to "local.php.bak" and renames the new
#    configuration file (e.g., "config/local_new.php") to "local.php" (skipped if IS_TEST_MODE is TRUE).
# 9. It cleans up temporary files and directories.
#
# HOW TO USE (FOR PRODUCTION DEPLOYMENT):
#
# 1. Set the Source and Destination database configuration variables, the Mautic configuration file paths,
#    and IS_TEST_MODE to FALSE or 0.
# 2. On your source server, go into a non-public directory where the backups will be created.
# 3. Create a file named "movedb.sh", paste this bash script code, then "sudo chmod +x movedb.sh" to make
#    it executable.
# 4. Run the script with "sudo ./movedb.sh" to start the export/import process.
#
# HOW TO USE (FOR INITIAL TEST DEPLOYMENT):
#
# You can set IS_TEST_MODE to TRUE or 1 to test the transfer first without stopping active campaigns
# or renaming configuration files. Set this value in the configuration section at the top of the script.
# Follow the same steps as for production deployment, but ensure you are targeting a test database
# (e.g., "mautic5_test"). Verify the transfer before proceeding with a production deployment.
#
# SUGGESTED USAGE:
#
# 1. Create a test database on the destination server (e.g., "mautic5_test").
# 2. Run the script with IS_TEST_MODE=TRUE to your test database and verify the import worked without errors.
# 3. Verify all tables were created, and that table sizes match the source database.
# 4. If everything looks good:
# 4.a. Create a new database on the destination server (e.g., "mautic5_production").
# 4.b. Set up your "movedb.sh" script with configuration to be ready to run, setting IS_TEST_MODE=FALSE.
# 4.c. Create a duplicate of your Mautic "config/local.php" file with the new db settings, name
#    it something like "config/local_new.php" so it is ready for the switch.
# 4.d. Take the website offline/into maintenance mode for ~5 minutes.
# 4.e. Run "sudo ./movedb.sh" to begin transfer of the production db.
# 4.f. When transfer is complete, the script will automatically switch "config/local_new.php" to "local.php".
# 4.g. Bring website out of maintenance mode.
# 5. Delete the test database, "movedb.sh" script, and any remaining backup files.
#
####################################################################################################

# Test mode configuration
IS_TEST_MODE=0  # Set to 1 (or TRUE) to skip the campaign stop/restore and config file rename steps

# Source database configuration
SRC_HOST="YOUR_MYSQL_SERVER_IP"
SRC_PORT=3306
SRC_USER="YOUR_DB_USER"
SRC_PASS="YOUR_DB_PASSWORD"
SRC_DB="YOUR_DB_NAME"

# Destination database configuration
DST_HOST="DESTINATION_MYSQL_SERVER_IP"
DST_PORT=3306
DST_USER="DESTINATION_DB_USER"
DST_PASS="DESTINATION_DB_PASSWORD"
DST_DB="DESTINATION_DB_NAME"

# Mautic configuration file paths
PATH_TO_LOCAL_CONFIG="config/local.php"
PATH_TO_NEW_LOCAL_CONFIG="config/local_new.php"
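# (Note: these are relative paths; since the script runs from your backups
#  directory, you will likely want absolute paths to your Mautic install here.)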

# Helper folders used on the source Linux server for the dump and fixed table files
DUMP_DIR="dump_tables"
FIXED_DIR="fixed_tables"
TEMP_DIR="$FIXED_DIR/temp"
CAMPAIGN_BACKUP_FILE="$TEMP_DIR/campaign_status_backup.sql"

# Function to restore campaign statuses on source database
restore_source_campaigns() {
    if [ -f "$CAMPAIGN_BACKUP_FILE" ]; then
        echo "Restoring campaign statuses on source database..."
        while IFS=$'\t' read -r id is_published; do
            mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -e "UPDATE $SRC_DB.mt_campaigns SET is_published = $is_published WHERE id = $id;"
            if [ $? -ne 0 ]; then
                echo "Error: Failed to restore campaign ID $id on source database."
                return 1
            fi
        done < "$CAMPAIGN_BACKUP_FILE"
        echo "Campaign statuses restored on source database."
    else
        echo "Warning: Campaign backup file not found. Cannot restore source campaign statuses."
    fi
    return 0
}

# Error handling function
handle_error() {
    echo "An error occurred. Exiting script..."
    # Disable trap to prevent recursive loop during campaign restoration
    trap - ERR
    # Only attempt to restore campaigns if backup file exists (indicating we passed the campaign stop step)
    if [ -f "$CAMPAIGN_BACKUP_FILE" ]; then
        restore_source_campaigns
        if [ $? -ne 0 ]; then
            echo "Error: Campaign restoration failed. Manual restoration required."
        fi
        # Delete campaign backup file after use
        rm -f "$CAMPAIGN_BACKUP_FILE"
        if [ $? -ne 0 ]; then
            echo "Warning: Failed to delete campaign backup file $CAMPAIGN_BACKUP_FILE."
        fi
    fi
    # Clean up temporary directory
    echo "Cleaning up temporary files and directory..."
    rm -rf "$TEMP_DIR"
    exit 1
}

# Set trap to catch errors and signals
trap handle_error ERR

# Step 1: Stop active campaigns on source database and back up their status (skipped in test mode)
echo "Backing up campaign statuses and stopping active campaigns on $SRC_DB..."
mkdir -p "$TEMP_DIR"

if [ "$IS_TEST_MODE" = "1" ] || [ "$IS_TEST_MODE" = "TRUE" ]; then
    echo "Test mode enabled. Skipping campaign stop and backup."
else
    # Create the campaign backup file first
    touch "$CAMPAIGN_BACKUP_FILE"
    if [ $? -ne 0 ]; then
        echo "Error: Failed to create campaign backup file at $CAMPAIGN_BACKUP_FILE."
        exit 1
    fi

    # Back up current campaign statuses via a client-side redirect. (SELECT ...
    # INTO OUTFILE would write the file on the MySQL server host, require the
    # FILE privilege, and refuse to overwrite the file we just created with touch.)
    mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -N -e "SELECT id, is_published FROM $SRC_DB.mt_campaigns WHERE is_published = 1;" > "$CAMPAIGN_BACKUP_FILE"
    if [ $? -ne 0 ]; then
        echo "Error: Failed to back up campaign statuses."
        exit 1
    fi

    # Stop active campaigns
    mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -e "UPDATE $SRC_DB.mt_campaigns SET is_published = 0 WHERE is_published = 1;"
    if [ $? -ne 0 ]; then
        echo "Error: Failed to stop active campaigns."
        exit 1
    fi
    echo "Active campaigns stopped on source database."
fi

# Step 2: Export all tables from source database
echo "Exporting tables from $SRC_DB on $SRC_HOST..."

# Create directories
mkdir -p "$DUMP_DIR" "$FIXED_DIR" "$TEMP_DIR"

# Export mt_email_stats and mt_leads with one INSERT per line
for table in "mt_email_stats" "mt_leads"; do
    echo "Exporting $table with one INSERT per line..."
    mysqldump -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" --opt --single-transaction --no-create-db --skip-extended-insert "$SRC_DB" "$table" > "$DUMP_DIR/$table.sql"
    if [ $? -ne 0 ] || [ ! -s "$DUMP_DIR/$table.sql" ]; then
        echo "Error: Failed to export $table."
        exit 1
    fi
done

# Export all other tables with regular extended INSERT
echo "Exporting all other tables with extended INSERT..."
mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -N -e "SHOW TABLES FROM $SRC_DB WHERE Tables_in_$SRC_DB NOT IN ('mt_email_stats', 'mt_leads')" | while read -r table; do
    echo "Exporting table $table..."
    mysqldump -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" --opt --single-transaction --no-create-db "$SRC_DB" "$table" > "$DUMP_DIR/$table.sql"
    if [ $? -ne 0 ] || [ ! -s "$DUMP_DIR/$table.sql" ]; then
        echo "Error: Failed to export table $table."
        exit 1
    fi
done

# Verify dumps
if [ -z "$(ls -A "$DUMP_DIR")" ]; then
    echo "Error: No tables exported into $DUMP_DIR."
    exit 1
fi

# Step 3: Fix and batch mt_email_stats and mt_leads
for table in "mt_email_stats" "mt_leads"; do
    TEMP_FILE="$TEMP_DIR/${table}_temp.sql"
    BATCHED_FILE="$TEMP_DIR/${table}_batched.sql"
    
    if [ "$table" = "mt_email_stats" ]; then
        echo "Fixing generated_sent_date in $DUMP_DIR/$table.sql (removing INSERT values)..."
        # Note: literal parentheses in the match pattern are escaped (with sed -E,
        # bare parens are grouping operators), and a bare NULL at the end of a row
        # is also rewritten to DEFAULT.
        sed -E \
            -e '/`generated_sent_date`/ s/`generated_sent_date`.*GENERATED ALWAYS AS \(concat\(year\(`date_sent`\),_utf8mb4.*\)\) VIRTUAL/`generated_sent_date` date GENERATED ALWAYS AS (concat(year(`date_sent`),'"'"'-'"'"',lpad(month(`date_sent`),2,'"'"'0'"'"'),'"'"'-'"'"',lpad(dayofmonth(`date_sent`),2,'"'"'0'"'"'))) VIRTUAL/' \
            -e '/INSERT INTO `mt_email_stats`/ s/('"'"'[0-9]{4}-[0-9]{2}-[0-9]{2}'"'"'|NULL)\);$/DEFAULT);/' "$DUMP_DIR/$table.sql" > "$TEMP_FILE"
    elif [ "$table" = "mt_leads" ]; then
        echo "Fixing generated_email_domain in $DUMP_DIR/$table.sql (removing INSERT values)..."
        # Limit the field rewrite to INSERT lines so the CREATE TABLE statement is untouched
        sed -E \
            -e "/INSERT INTO \`mt_leads\`/ s/(([^,]*,){45})(NULL|'[^']*')/\1DEFAULT/" "$DUMP_DIR/mt_leads.sql" > "$TEMP_FILE"
    fi
    
    if [ $? -ne 0 ] || [ ! -s "$TEMP_FILE" ]; then
        echo "Error: Failed to fix $table.sql."
        exit 1
    fi

    # Verify initial fix
    if grep -q "_utf8mb4" "$TEMP_FILE"; then
        echo "Error: '_utf8mb4' still present in $table.sql."
        exit 1
    fi
    if [ "$table" = "mt_email_stats" ] && grep -q "'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}'" "$TEMP_FILE"; then
        echo "Warning: Possible date values remain in $table.sql. Checking INSERT statements..."
        if grep "'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}'" "$TEMP_FILE" | grep -B 1 "INSERT INTO \`$table\`" | grep -v "date_sent" | grep -v "date_read" | grep -v "last_opened" | grep -q "'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}'"; then
            echo "Error: Date values still present in INSERT statements for generated_sent_date."
            exit 1
        fi
    fi
    if [ "$table" = "mt_leads" ] && grep -q "'[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]*'" "$TEMP_FILE"; then
        echo "Warning: Possible domain values remain in $table.sql. Checking INSERT statements..."
        if grep "'[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]*'" "$TEMP_FILE" | grep -B 1 "INSERT INTO \`$table\`" | grep -v "email" | grep -v "website" | grep -v "facebook" | grep -v "linkedin" | grep -q "'[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]*'"; then
            echo "Error: Domain values still present in INSERT statements for generated_email_domain."
            exit 1
        fi
    fi

    # Batch into groups of 250 rows
    echo "Batching $table INSERT statements into groups of 250..."
    awk '
    BEGIN { 
        count = 0; 
        values = ""; 
        header = "INSERT INTO `'"$table"'` VALUES "
    }
    /INSERT INTO `'"$table"'` VALUES/ { 
        sub(/INSERT INTO `'"$table"'` VALUES /, "")
        match($0, /\((.*)\);$/, arr)
        if (count == 0) {
            values = "(" arr[1] ")"
        } else {
            values = values ",(" arr[1] ")"
        }
        count++
        if (count == 250) {
            print header values ";"
            count = 0
            values = ""
        }
        next
    }
    END { 
        if (count > 0) {
            print header values ";"
        }
    }
    !/INSERT INTO `'"$table"'` VALUES/ { print $0 }
    ' "$TEMP_FILE" > "$BATCHED_FILE"

    if [ $? -ne 0 ] || [ ! -s "$BATCHED_FILE" ]; then
        echo "Error: Failed to batch $table.sql into $BATCHED_FILE."
        exit 1
    fi

    # Move batched file to final location
    mv "$BATCHED_FILE" "$FIXED_DIR/$table.sql"
    rm -f "$TEMP_FILE"
    echo "Successfully fixed and batched $table.sql"
done

# Step 4: Move other tables from DUMP_DIR to FIXED_DIR
echo "Moving other tables from $DUMP_DIR to $FIXED_DIR..."
for table_file in "$DUMP_DIR"/*.sql; do
    table_name=$(basename "$table_file" .sql)
    if [ "$table_name" != "mt_email_stats" ] && [ "$table_name" != "mt_leads" ]; then
        cp "$table_file" "$FIXED_DIR/$table_name.sql"
    fi
done

# Step 5: Import tables in dependency order
echo "Importing tables into $DST_DB on $DST_HOST in dependency order..."

# Define import order based on Mautic dependencies
TABLE_ORDER=(
    "mt_ip_addresses"      # No FKs
    "mt_leads"            # Referenced by mt_email_stats (lead_id)
    "mt_emails"           # Referenced by mt_email_stats (email_id)
    "mt_lead_lists"       # Referenced by mt_email_stats (list_id)
    "mt_email_copies"     # Referenced by mt_email_stats (copy_id)
    "mt_email_stats"      # Depends on above tables
)
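
# (A sketch for extending TABLE_ORDER if your install has extra tables with
#  foreign keys: list FK relationships from information_schema on the source
#  server, and put referenced tables before the tables that point at them:
#    mysql -h "$SRC_HOST" -P "$SRC_PORT" -u "$SRC_USER" -p"$SRC_PASS" -N -e \
#      "SELECT table_name, referenced_table_name FROM information_schema.key_column_usage
#       WHERE table_schema='$SRC_DB' AND referenced_table_name IS NOT NULL;")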

# Import tables in specified order
for table in "${TABLE_ORDER[@]}"; do
    if [ -f "$FIXED_DIR/$table.sql" ]; then
        echo "Importing table $table..."
        mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" "$DST_DB" < "$FIXED_DIR/$table.sql"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to import table $table."
            exit 1
        fi
        # -N returns just the number; empty output means verification failed
        # (the old "|| echo 0" fallback made this check impossible to fail)
        ROW_COUNT=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -N -e "SELECT COUNT(*) FROM \`$table\`" "$DST_DB" 2>/dev/null)
        if [ -n "$ROW_COUNT" ]; then
            echo "Imported $table with $ROW_COUNT rows."
        else
            echo "Error: Could not verify table $table after import."
            exit 1
        fi
        # Remove from fixed_tables to avoid re-importing
        rm "$FIXED_DIR/$table.sql"
    fi
done

# Import remaining tables (alphabetical order)
for table_file in "$FIXED_DIR"/*.sql; do
    if [ -f "$table_file" ]; then
        table_name=$(basename "$table_file" .sql)
        echo "Importing table $table_name..."
        mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" "$DST_DB" < "$table_file"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to import table $table_name."
            exit 1
        fi
        # As above: -N returns just the number, and empty output means failure
        ROW_COUNT=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -N -e "SELECT COUNT(*) FROM \`$table_name\`" "$DST_DB" 2>/dev/null)
        if [ -n "$ROW_COUNT" ]; then
            echo "Imported $table_name with $ROW_COUNT rows."
        else
            echo "Error: Could not verify table $table_name after import."
            exit 1
        fi
    fi
done

# Step 6: Restore campaign statuses on source and destination databases (skipped in test mode)
echo "Restoring campaign statuses on $SRC_DB and $DST_DB..."

if [ "$IS_TEST_MODE" = "1" ] || [ "$IS_TEST_MODE" = "TRUE" ]; then
    echo "Test mode enabled. Skipping campaign restoration."
else
    # Restore campaigns on source database
    restore_source_campaigns
    if [ $? -ne 0 ]; then
        echo "Error: Failed to restore campaigns on source database."
        exit 1
    fi

    # Restore campaigns on destination database
    if [ -f "$CAMPAIGN_BACKUP_FILE" ]; then
        while IFS=$'\t' read -r id is_published; do
            mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "UPDATE $DST_DB.mt_campaigns SET is_published = $is_published WHERE id = $id;"
            if [ $? -ne 0 ]; then
                echo "Error: Failed to restore campaign ID $id on destination database."
                exit 1
            fi
        done < "$CAMPAIGN_BACKUP_FILE"
        echo "Campaign statuses restored on destination database."
    else
        echo "Warning: Campaign backup file not found. Skipping destination campaign restoration."
    fi

    # Delete campaign backup file after use
    rm -f "$CAMPAIGN_BACKUP_FILE"
    if [ $? -ne 0 ]; then
        echo "Warning: Failed to delete campaign backup file $CAMPAIGN_BACKUP_FILE."
    fi
fi

# Step 7: Verify mt_email_stats and mt_leads schema and sample data
for table in "mt_email_stats" "mt_leads"; do
    echo "Verifying $table schema and sample data..."
    if [ "$table" = "mt_email_stats" ]; then
        SCHEMA=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SHOW CREATE TABLE $table\G" "$DST_DB" 2>/dev/null | grep "generated_sent_date")
        if echo "$SCHEMA" | grep -q "concat(year(\`date_sent\`),'-',lpad(month(\`date_sent\`),2,'0'),'-',lpad(dayofmonth(\`date_sent\`),2,'0'))"; then
            echo "Schema for generated_sent_date looks correct."
        else
            echo "Error: generated_sent_date schema does not match expected expression."
            exit 1
        fi
        SAMPLE=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SELECT date_sent, generated_sent_date FROM $table LIMIT 1" "$DST_DB" 2>/dev/null)
        echo "Sample data for $table: $SAMPLE"
    elif [ "$table" = "mt_leads" ]; then
        SCHEMA=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SHOW CREATE TABLE $table\G" "$DST_DB" 2>/dev/null | grep "generated_email_domain")
        if echo "$SCHEMA" | grep -q "substr(\`email\`,locate('@',\`email\`) + 1)"; then
            echo "Schema for generated_email_domain looks correct."
        else
            echo "Error: generated_email_domain schema does not match expected expression."
            exit 1
        fi
        SAMPLE=$(mysql -h "$DST_HOST" -P "$DST_PORT" -u "$DST_USER" -p"$DST_PASS" -e "SELECT email, generated_email_domain FROM $table LIMIT 1" "$DST_DB" 2>/dev/null)
        echo "Sample data for $table: $SAMPLE"
    fi
done

# Step 8: Rename configuration files (skipped in test mode)
echo "Renaming configuration files..."
if [ "$IS_TEST_MODE" = "1" ] || [ "$IS_TEST_MODE" = "TRUE" ]; then
    echo "Test mode enabled. Skipping configuration file renaming."
else
    if [ -f "$PATH_TO_LOCAL_CONFIG" ]; then
        mv "$PATH_TO_LOCAL_CONFIG" "${PATH_TO_LOCAL_CONFIG}.bak"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to rename $PATH_TO_LOCAL_CONFIG to ${PATH_TO_LOCAL_CONFIG}.bak."
            exit 1
        fi
        echo "Renamed $PATH_TO_LOCAL_CONFIG to ${PATH_TO_LOCAL_CONFIG}.bak."
    else
        echo "Warning: $PATH_TO_LOCAL_CONFIG not found. Skipping backup."
    fi

    if [ -f "$PATH_TO_NEW_LOCAL_CONFIG" ]; then
        mv "$PATH_TO_NEW_LOCAL_CONFIG" "$PATH_TO_LOCAL_CONFIG"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to rename $PATH_TO_NEW_LOCAL_CONFIG to $PATH_TO_LOCAL_CONFIG."
            exit 1
        fi
        echo "Renamed $PATH_TO_NEW_LOCAL_CONFIG to $PATH_TO_LOCAL_CONFIG."
    else
        echo "Error: $PATH_TO_NEW_LOCAL_CONFIG not found. Cannot proceed with configuration switch."
        exit 1
    fi
fi

# Step 9: Clean up temporary directory
echo "Cleaning up temporary files and directory..."
rm -rf "$TEMP_DIR"

echo "Export, fix, batch, import, campaign restoration, and configuration switch completed successfully."