Apache Spark Resource Configuration
Apache Spark's resource configuration remains one of the most challenging aspects of operating data pipelines at scale. Theoretical best practices are widely available, but production deployments often require adjustments to accommodate real-world constraints. This guide bridges that gap, exploring how to properly size Spark resources—from executors to partitions—while identifying common failure patterns and strategies to address them in production.
The Baseline Configuration
Consider a typical Spark job processing 1TB of data. A standard recommended setup might include (a configuration sketch follows the list):
- A cluster of 20 nodes, each with 32 cores and 256GB RAM
- Effective capacity of 28 cores and 240GB RAM per node after system overhead
- 4 executors per node (80 total executors)
- 7 cores per executor (with 1 core reserved for overhead)
- 56GB RAM per executor
- ~128MB partition sizes for optimal parallelism
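Expressed as SparkSession configuration, the baseline might look roughly like the sketch below. This is a minimal sketch: the application name is a placeholder, and the shuffle partition count is an assumption derived from the ~1TB / ~128MB figures rather than part of the original setup.

```scala
import org.apache.spark.sql.SparkSession

// Baseline sizing sketch: 20 nodes x 4 executors = 80 executors,
// 7 cores and 56GB of heap each, ~128MB partitions.
val spark = SparkSession.builder()
  .appName("baseline-1tb-job")                               // placeholder name
  .config("spark.executor.instances", "80")                  // 20 nodes x 4 executors per node
  .config("spark.executor.cores", "7")                       // 4 x 7 = 28 usable cores per node
  .config("spark.executor.memory", "56g")                    // 4 x 56GB = 224GB, within ~240GB usable per node
  .config("spark.sql.files.maxPartitionBytes", "134217728")  // ~128MB input partitions (the default)
  .config("spark.sql.shuffle.partitions", "8000")            // assumption: ~1TB / ~128MB target shuffle partitions
  .getOrCreate()
```

On YARN the same numbers map directly to spark-submit's --num-executors, --executor-cores, and --executor-memory flags.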
While this configuration serves as a solid starting point, production workloads rarely conform to such clean boundaries. Let's examine some common failure patterns and mitigation strategies.
When Reality Hits: Failure Patterns and Solutions
Failure Pattern #1: Workload Evolution Requiring Infrastructure Changes
A typical scenario: A job that previously ran efficiently on 20 nodes begins to experience increasing memory pressure or extended runtimes, despite configuration adjustments. Signs of resource constraints include (see the monitoring sketch after this list):
- Consistently high GC time across executors (>15% of executor runtime)
- Storage fraction frequently dropping below 0.3
- Executor memory usage consistently above 85%
- Stage attempts failing despite conservative memory settings
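The GC signal in the first bullet can also be surfaced from inside the application with a small listener. This is a hedged sketch rather than a full monitoring setup; the class name is ours, and the 15% threshold simply mirrors the rule of thumb above.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Warns whenever a finished task spent more than 15% of its run time in JVM GC.
class GcPressureListener(threshold: Double = 0.15) extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val metrics = taskEnd.taskMetrics
    if (metrics != null && metrics.executorRunTime > 0) {
      val gcRatio = metrics.jvmGCTime.toDouble / metrics.executorRunTime
      if (gcRatio > threshold) {
        println(f"High GC pressure: executor ${taskEnd.taskInfo.executorId} " +
          f"stage ${taskEnd.stageId} task ${taskEnd.taskInfo.taskId} " +
          f"spent ${gcRatio * 100}%.1f%% of its runtime in GC")
      }
    }
  }
}

// Register on an active SparkSession (variable name `spark` assumed):
// spark.sparkContext.addSparkListener(new GcPressureListener())
```

The same numbers are visible in the Spark UI's Executors tab; the listener just makes the threshold explicit in your logs.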
Root cause analysis approach:
- Analyze growth patterns in your data volume and complexity.
- Profile representative jobs to understand resource bottlenecks.
Key scaling triggers:
- CPU-bound: When average CPU utilization stays above 80% for most of the job duration.
- Memory-bound: When GC time exceeds 15% or OOM errors occur despite tuning.
- I/O-bound: When shuffle spill exceeds 20% of executor memory.
If CPU-bound (high CPU utilization, low wait times):
- First try increasing cores per executor.
- If insufficient, add nodes while maintaining a similar cores/node ratio.
If memory-bound (Out Of Memory - OOM):
- First try reducing executors per node to allocate more memory per executor.
- If insufficient, add nodes with higher memory configurations.
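As a rough worked example of the memory-bound path on the baseline cluster: dropping from 4 to 3 executors per node trades cores for heap per executor. The exact values below are assumptions derived from the 28-core / 240GB per-node figures above, not prescriptions.

```scala
import org.apache.spark.sql.SparkSession

// Memory-bound resize sketch: same 20 nodes, 3 executors per node instead of 4.
val spark = SparkSession.builder()
  .appName("memory-bound-resize")            // placeholder name
  .config("spark.executor.instances", "60")  // 20 nodes x 3 executors per node
  .config("spark.executor.cores", "9")       // ~28 usable cores / 3
  .config("spark.executor.memory", "72g")    // 3 x 72GB = 216GB, leaving headroom within 240GB
  .getOrCreate()
```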
Failure Pattern #2: Memory Exhaustion in Compute-Heavy Operations
A typical scenario: Your job runs fine for many days but then suddenly fails with Out Of Memory (OOM) errors. Investigation reveals that during month-end processing, certain joins produce intermediate results 5-10x larger than your input data. The executor memory gets exhausted trying to handle these large shuffles.
A possible solution would be to update the configuration to:
- spark.executor.memoryOverhead: ~25% of executor memory (up from the ~10% default)
- spark.memory.fraction: 0.75 (increased from the default 0.6)
These settings help because they:
- Reserve more memory for off-heap needs such as shuffle buffers, network transfers, and JVM overhead
- Give the unified execution/storage region a larger share of the heap, so large shuffles have more room before spilling or failing
- Reduce pressure on the garbage-collected heap by keeping shuffle and network buffers off-heap
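Expressed as configuration, and assuming the 56GB baseline executor, this might look like the sketch below. Note that spark.executor.memoryOverhead takes an absolute size rather than a percentage; on Spark 3.3+ the same intent can be expressed as a factor via spark.executor.memoryOverheadFactor.

```scala
import org.apache.spark.sql.SparkSession

// Memory-pressure adjustments sketch (sizes assume the 56GB baseline executor).
val spark = SparkSession.builder()
  .appName("month-end-heavy-join")                  // placeholder name
  .config("spark.executor.memory", "56g")
  .config("spark.executor.memoryOverhead", "14g")   // ~25% of executor memory; the default is ~10%
  // .config("spark.executor.memoryOverheadFactor", "0.25")  // Spark 3.3+ factor-based equivalent
  .config("spark.memory.fraction", "0.75")          // up from the 0.6 default
  .getOrCreate()
```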
Failure Pattern #3: Data Skew, The Silent Killer
A typical scenario: Your daily aggregation job suddenly takes 4 hours instead of 1 hour. Investigation shows that 90% of the data is going to 10% of the partitions. Common culprits:
- Timestamp-based keys clustering around business hours
- Geographic data concentrated in major cities
- Business IDs with vastly different activity levels
Before implementing solutions, quantify your skew (a quick check is sketched after this list):
- Monitor partition sizes through the Spark UI
- Track duration variation across tasks within the same stage
- Look for orders of magnitude differences in partition sizes
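A quick way to do that is to compare per-key and per-partition row counts directly. The helper below is a hedged sketch; the DataFrame and key column come from the caller, and the names in the usage comment are placeholders.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.desc

// Rough skew report for an arbitrary DataFrame and key column.
def skewReport(df: DataFrame, keyCol: String): Unit = {
  // Per-key counts: how heavy are the hottest keys?
  df.groupBy(keyCol).count().orderBy(desc("count")).show(20, truncate = false)

  // Partition-level view after shuffling on the key: rows per partition.
  val sizes = df.repartition(df.col(keyCol)).rdd
    .mapPartitions(it => Iterator(it.size)).collect()
  println(s"partitions=${sizes.length} min=${sizes.min} max=${sizes.max}")
}

// Usage (placeholder names): skewReport(events, "business_id")
```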
A possible solution would be to analyze your key distribution and, for known skewed keys, implement pre-processing like so (date_col, minute_col, business_id, and row_number stand in for existing columns in your data):

import org.apache.spark.sql.functions.{concat, hash}

// For timestamp skew
val smoothed_key = concat(date_col, hash(minute_col) % 10)
// For business ID skew
val salted_key = concat(business_id, hash(row_number) % 5)

Using Spark's built-in skew handling helps, but understanding the specific skew of your data is more robust and lasting. Spark's skew-handling configurations:
- spark.sql.adaptive.enabled: true
- spark.sql.adaptive.skewJoin.enabled: true
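For a known skewed join key, a common manual complement to adaptive execution is salting: spread the hot key across a small number of sub-keys on the large side and replicate the small side to match. The sketch below assumes a large facts table joined to a much smaller lookup table; the table names, key column, and the saltedJoin helper are hypothetical, while the imported functions are standard Spark SQL functions.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, concat_ws, explode, floor, lit, rand, sequence}

// Salted join sketch: spread each hot key over `buckets` sub-keys on the large side,
// and replicate every lookup row `buckets` times so each sub-key still finds its match.
def saltedJoin(facts: DataFrame, lookup: DataFrame, key: String, buckets: Int): DataFrame = {
  val saltedFacts = facts
    .withColumn("salt", floor(rand() * buckets))
    .withColumn("salted_key", concat_ws("_", col(key), col("salt")))

  val explodedLookup = lookup
    .withColumn("salt", explode(sequence(lit(0), lit(buckets - 1))))
    .withColumn("salted_key", concat_ws("_", col(key), col("salt")))

  saltedFacts.join(explodedLookup.drop(key, "salt"), Seq("salted_key"))
    .drop("salted_key", "salt")
}

// Usage (placeholder names): saltedJoin(facts, lookup, "business_id", buckets = 8)
```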
Failure Pattern #4: Resource Starvation in Mixed Workloads
A typical scenario: A seemingly well-configured job starts showing erratic behavior: some stages complete quickly while others seem stuck, executors appear underutilized despite high load, and the overall job progress becomes unpredictable. This is a typical case of resource starvation occurring within a single application. Common symptoms include:
- Late stages in complex DAGs struggle to get resources
- Shuffle operations become bottlenecks
- Some executors are overwhelmed while others sit idle
- Task attempts timeout and retry repeatedly
The root cause often lies in complex transformation chains:

data.join(lookup1).groupBy("key1").agg(...).join(lookup2).groupBy("key2").agg(...)

Each transformation creates intermediate results that compete for resources. Without proper management, earlier stages can hog resources, starving later stages. Possible solutions include:
- Dividing compute-intensive jobs into smaller jobs that use resources more predictably.
- If splitting a large job isn't possible, using checkpoint and persist methods to better divide a single job into distinct parts, as sketched after this list (expect a future blog post on these methods).
- Applying Spark's shuffle and allocation management: enable spark.dynamicAllocation.enabled, together with either spark.shuffle.service.enabled (when an external shuffle service is available) or spark.dynamicAllocation.shuffleTracking.enabled (when it is not).
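As a hedged sketch of the checkpoint/persist idea in the second bullet, here is the earlier two-step chain split into two materialized parts. The table and column names mirror the placeholders in the chain above, lookup2 is assumed to map key1 to key2, an active SparkSession named spark is assumed, and the checkpoint directory is a placeholder.

```scala
import org.apache.spark.sql.functions.sum
import org.apache.spark.storage.StorageLevel

// Part 1: first join + aggregation, materialized before the next chain begins.
val stage1 = data.join(lookup1, Seq("key1"))
  .groupBy("key1").agg(sum("amount").as("total1"))
  .persist(StorageLevel.MEMORY_AND_DISK)
stage1.count()  // force materialization so later stages read the persisted result

// Part 2: second join + aggregation runs against the materialized intermediate.
val stage2 = stage1.join(lookup2, Seq("key1"))
  .groupBy("key2").agg(sum("total1").as("total2"))

// Alternatively, truncate the lineage entirely with a checkpoint:
// spark.sparkContext.setCheckpointDir("hdfs:///tmp/checkpoints")  // placeholder path
// val stage1Checkpointed = stage1.checkpoint()
```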
Conclusions & The Path Forward
We've found that most Spark issues manifest first as performance degradation before becoming outright failures. The goal of a data engineering team isn't to prevent all issues but to catch and address them before they impact production stability. While adding resources can sometimes help, precise optimization and proper monitoring often provide more sustainable solutions. Spark offers a robust set of job management tools and settings, but addressing problems through standard Spark configurations alone often proves insufficient.

The Flarion platform transforms this landscape in two key ways: through significant workload acceleration that reduces resource requirements and minimizes garbage collection overhead, and by providing enhanced visibility into Spark deployments. This combination of speed and improved observability enables engineering teams to identify potential issues before they escalate into failures, shifting from reactive troubleshooting to proactive optimization. As a result, data engineering teams experience both reduced failure rates and decreased operational burden, creating a more stable and efficient production environment.