Error Medic

Troubleshooting Salesforce Data Migration Failures and Timeout Errors

Fix Salesforce data migration timeouts, slow performance, and configuration crashes. Learn how to optimize bulk API jobs and resolve UNABLE_TO_LOCK_ROW errors.

Key Takeaways
  • UNABLE_TO_LOCK_ROW errors during migration are caused by concurrent updates to the same parent record (e.g., Accounts when inserting Contacts).
  • Salesforce timeout and slow performance during bulk loads usually stem from complex sharing rules, excessive automation (Triggers/Flows), or unindexed queries.
  • Temporarily disable complex triggers, validation rules, and workflows during massive data migrations to prevent load failures and avoid Apex CPU time limit exceeded errors.
  • Use the Bulk API 2.0 instead of REST API for large datasets, and ensure your CSV files are sorted by Parent ID to minimize lock contention.
Salesforce Data Migration Methods Compared
| Method | When to Use | Time | Risk |
|---|---|---|---|
| Data Import Wizard | Small loads (<50k records), simple relationships | Fast setup | Low |
| Data Loader (SOAP/REST) | Medium loads (50k-5M), complex mappings | Moderate | Medium (timeouts possible) |
| Bulk API 2.0 | Large enterprise migrations (>5M records) | High setup, fast execution | High (requires careful chunking) |
| ETL Tools (MuleSoft, Boomi) | Continuous sync, transformations required | Long setup | Low (built-in retry logic) |

Understanding Salesforce Data Migration Failures

When undertaking a Salesforce data migration, enterprise teams frequently encounter performance bottlenecks, timeouts, and outright crashes. These issues typically manifest as UNABLE_TO_LOCK_ROW, Apex CPU time limit exceeded, or REQUEST_RUNNING_TOO_LONG errors. Understanding the underlying Salesforce architecture, specifically its multi-tenant limits and execution context, is crucial for successful troubleshooting.

Diagnosing 'UNABLE_TO_LOCK_ROW'

The UNABLE_TO_LOCK_ROW error is the most common cause of Salesforce data migration failures, particularly when using the Bulk API with parallel processing.

The Root Cause: Salesforce locks records to prevent data integrity issues during updates. When inserting child records (like Contacts or Opportunities), Salesforce implicitly locks the parent record (the Account) to update roll-up summary fields and calculate sharing rules. If multiple batches attempt to insert child records belonging to the same parent simultaneously, one batch will acquire the lock, and the others will wait. If the wait exceeds 10 seconds, the transaction fails with UNABLE_TO_LOCK_ROW.
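The timeout behavior described above can be simulated outside Salesforce. The sketch below (illustrative Python, not Salesforce code) models two batches contending for one parent-record lock; Salesforce aborts a waiter after roughly 10 seconds, while the demo uses a 0.2-second timeout so it runs instantly:

```python
import threading
import time

# One lock stands in for the implicit lock Salesforce takes on a parent
# Account while child records are inserted beneath it.
parent_lock = threading.Lock()
results = {}

def load_batch(name, hold_time):
    # Try to take the parent row lock; give up after 0.2s (Salesforce: ~10s).
    acquired = parent_lock.acquire(timeout=0.2)
    if not acquired:
        results[name] = "UNABLE_TO_LOCK_ROW"  # simulated failure
        return
    try:
        time.sleep(hold_time)  # simulate roll-up and sharing recalculation
        results[name] = "SUCCESS"
    finally:
        parent_lock.release()

t1 = threading.Thread(target=load_batch, args=("batch_1", 0.5))
t2 = threading.Thread(target=load_batch, args=("batch_2", 0.5))
t1.start(); time.sleep(0.05); t2.start()
t1.join(); t2.join()
print(results)  # batch_1 succeeds; batch_2 times out waiting for the lock
```

The second batch fails not because its data is bad, but because it waited too long for a lock another batch held, which is exactly why retrying the failed records later usually succeeds.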

Diagnosis:

  1. Review your error logs from Data Loader or your ETL tool.
  2. Identify if the failing records share the same parent ID (e.g., AccountId).
  3. Check if your Bulk API job is set to 'Parallel' mode.
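Steps 1 and 2 can be automated with a short script that groups lock failures by parent ID. This is a minimal sketch; the `AccountId` and `Error` column names are assumptions, so adjust them to match your Data Loader or ETL error file:

```python
from collections import Counter

def contention_hotspots(error_rows, parent_field="AccountId", min_failures=2):
    """Group failed rows by parent ID. A parent that appears in many
    UNABLE_TO_LOCK_ROW failures is the likely source of contention."""
    lock_errors = [r for r in error_rows if "UNABLE_TO_LOCK_ROW" in r.get("Error", "")]
    counts = Counter(r[parent_field] for r in lock_errors)
    return {pid: n for pid, n in counts.items() if n >= min_failures}

# Example rows as they might appear in a Data Loader error CSV
rows = [
    {"Id": "003A", "AccountId": "001X", "Error": "UNABLE_TO_LOCK_ROW: unable to obtain exclusive access"},
    {"Id": "003B", "AccountId": "001X", "Error": "UNABLE_TO_LOCK_ROW: unable to obtain exclusive access"},
    {"Id": "003C", "AccountId": "001Y", "Error": "REQUIRED_FIELD_MISSING: [LastName]"},
]
print(contention_hotspots(rows))  # {'001X': 2}
```

If one or two parent IDs dominate the output, you have lock contention (or data skew) on those parents rather than a data-quality problem.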

Step 1: Mitigating Lock Contention

To fix row lock errors, you must reduce contention.

  • Sort your Source Data: Before uploading, sort your CSV file by the Parent ID. This ensures that all child records for a given parent are processed in the same batch, eliminating parallel contention for that parent's lock.
  • Switch to Serial Mode: If sorting is not feasible or doesn't resolve the issue, configure your Bulk API job to run in 'Serial' mode instead of 'Parallel'. While this extends the overall migration time, it guarantees that batches are processed sequentially, completely avoiding lock contention.
  • Reduce Batch Size: Lower the batch size from the default (often 200 or 10,000 for Bulk) to a smaller number like 50 or 100. This reduces the duration of the lock held by any single transaction.
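The first and third bullets can be combined in a pre-processing step: sort the source rows by parent ID, then slice them into batches, so each parent's children cluster into as few batches as possible. A minimal sketch using only the standard library (the `AccountId` column name is an assumption):

```python
import csv
import io

def sort_and_batch(csv_text, parent_field="AccountId", batch_size=100):
    """Sort rows by parent ID so each parent's children cluster together,
    then slice into fixed-size batches. Keeps the header row out of the
    sort -- a common mistake when using plain `sort` on the command line."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: r[parent_field])
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

raw = "LastName,AccountId\nSmith,001B\nJones,001A\nLee,001B\nKim,001A\n"
batches = sort_and_batch(raw, batch_size=2)
# First batch holds both 001A children, second holds both 001B children,
# so no two concurrent batches fight over the same parent's lock.
```

For files too large for memory, the same effect comes from an external sort before chunking; the principle is identical.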

Step 2: Resolving 'Apex CPU time limit exceeded' and Slow Performance

Salesforce enforces a strict CPU time limit per transaction (10,000 milliseconds for synchronous, 60,000 for asynchronous). During a data migration, triggers, flows, workflow rules, and validation rules all consume this budget. If your org's automation is overly complex, simply inserting records can exhaust the limit and fail the entire batch.

The Fix: Defer Automation

The golden rule of enterprise data migration is to disable non-essential automation during the load.

  1. Triggers: Create a hierarchical Custom Setting or Custom Metadata Type (e.g., Bypass_Triggers__c) and check this value at the start of every Apex Trigger. Set this to true for the migration user.
  2. Flows and Process Builder: Deactivate flows that are triggered on creation/update of the target objects. If deactivation isn't possible, add a condition to the flow's start criteria to bypass the integration user's profile.
  3. Validation Rules: Deactivate validation rules temporarily. Cleanse your data before importing it, rather than relying on Salesforce to catch errors during the migration.
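The hierarchical custom setting in step 1 resolves from the most specific level down: a value set for a specific user overrides the org-wide default. A minimal sketch of that resolution logic (names like `Bypass_Triggers__c` and the usernames are illustrative, not a Salesforce API):

```python
def bypass_triggers(org_default, user_overrides, username):
    """Mimic how a hierarchical custom setting such as Bypass_Triggers__c
    resolves for the running user: a user-level record, if present, wins
    over the org-wide default."""
    return user_overrides.get(username, org_default)

# Org default: triggers run (bypass = False).
# Migration user: triggers bypassed (bypass = True).
overrides = {"migration_user@enterprise.com": True}
print(bypass_triggers(False, overrides, "migration_user@enterprise.com"))  # True
print(bypass_triggers(False, overrides, "admin@enterprise.com"))           # False
```

In Apex, the equivalent check is a one-line guard at the top of each trigger that reads the setting via `getInstance()` and returns early when the bypass flag is set for the running user.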

Step 3: Optimizing Sharing Calculation

Salesforce sharing recalculations can severely degrade performance and cause timeouts. When you insert a record with an owner, Salesforce must compute who else can see it based on the role hierarchy, sharing rules, and manual shares.

Troubleshooting Sharing Delays:

  • Defer Sharing Calculations: In Salesforce Setup, navigate to 'Defer Sharing Calculations' and suspend them before starting a massive load. Remember to resume and recalculate sharing once the migration is complete.
  • Public Read/Write: If acceptable from a security standpoint during the migration window, temporarily set the Organization-Wide Defaults (OWD) for the target object to Public Read/Write. This dramatically speeds up inserts by bypassing complex sharing evaluations.

Step 4: Indexing and Skew

Data skew occurs when an abnormally large number of child records (e.g., >10,000) are associated with a single parent record, or when a single user owns a massive number of records (Ownership Skew).

  • Identify Skew: Query your existing data to identify Accounts with excessive Contacts or Opportunities. Restructure the data if possible before migration.
  • Custom Indexes: If you are using External IDs for upsert operations, ensure those fields are marked as 'External ID' and 'Unique'. Salesforce automatically indexes these. However, if your queries or migrations filter on non-indexed text fields, request Salesforce Support to add a custom index to improve performance and prevent REQUEST_RUNNING_TOO_LONG errors.

Diagnostic Commands and Scripts

```bash
# List recent Apex debug logs for the migration user, then fetch the most
# recent one and search it for limit exceptions
sfdx force:apex:log:list -u migration_user@enterprise.com
sfdx force:apex:log:get -n 1 -u migration_user@enterprise.com | grep 'System.LimitException'

# Check the status of a Bulk API job that might be hanging or failing
sfdx force:data:bulk:status -i 750xx0000000001 -u migration_user@enterprise.com

# Sort a CSV by AccountId (column 2) before upload to prevent lock
# contention; keep the header row out of the sort
head -n 1 original_contacts.csv > sorted_contacts.csv
tail -n +2 original_contacts.csv | sort -t, -k2,2 >> sorted_contacts.csv
```

```python
# Quick check for Account data skew before migration
# Requires the simple-salesforce library (pip install simple-salesforce)
from simple_salesforce import Salesforce

sf = Salesforce(username='user', password='pwd', security_token='token')
skew_check = sf.query(
    "SELECT AccountId, COUNT(Id) cnt FROM Contact "
    "GROUP BY AccountId HAVING COUNT(Id) > 10000"
)
print(skew_check)
```

Error Medic Editorial

Error Medic Editorial comprises senior SREs, DevOps practitioners, and Salesforce Architects dedicated to providing actionable, code-first solutions for enterprise system failures and integration bottlenecks.
