Overview
Let’s say you are running Spark workloads in Microsoft Fabric. You have committed capacity units (CUs) to handle your usual work: BI reports, data transformations, and dashboards. But Spark jobs can spike unpredictably. If Spark shares that capacity, a spike can starve your regular workloads or force you to overprovision capacity, which wastes money.
Autoscale Billing for Spark solves this problem by letting Spark operate independently. Once enabled, serverless Spark no longer consumes your assigned Fabric capacity units. Instead, each Spark job runs on its own dedicated resources, billed per use, so your other workloads are unaffected.
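To make the pay-per-use model concrete, here is a minimal sketch of how you might estimate the cost of a serverless Spark job under this billing mode. The function name, the $0.20-per-CU-hour rate, and the job sizes are hypothetical placeholders, not official Fabric pricing; check the Fabric pricing page for actual rates in your region.

```python
# Hypothetical sketch of per-use Spark cost estimation.
# Rates and job sizes below are illustrative, not official Fabric pricing.

def autoscale_spark_cost(cu_seconds: float, rate_per_cu_hour: float) -> float:
    """Estimate pay-as-you-go cost for one serverless Spark job.

    cu_seconds: capacity-unit-seconds the job consumed
                (CUs allocated * runtime in seconds).
    rate_per_cu_hour: hypothetical pay-as-you-go price per CU-hour.
    """
    return (cu_seconds / 3600.0) * rate_per_cu_hour

# Example: a job using 8 CUs for 15 minutes = 8 * 900 = 7200 CU-seconds,
# i.e. 2 CU-hours, at an assumed $0.20 per CU-hour.
cost = autoscale_spark_cost(8 * 900, 0.20)
print(f"${cost:.2f}")  # → $0.40
```

The key point the arithmetic illustrates: the job's cost depends only on what the job itself consumed, not on the size of your committed Fabric capacity, which stays free for everything else.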
In short: turn it on, forget about it, and let Spark run without disrupting everything else.
How Serverless Spark Works Without Using Fabric Capacity Units
Here is what happens under the hood: