Deploying Azure Databricks with a Virtual Network and Network Security Group

Colby T. Ford, Ph.D.
Jul 13, 2022

Deploying Databricks in Azure is easy. Deploying Databricks in a super locked-down, ‘make your security group happy’ kind of way…not as easy.

In some enterprise cloud architectures, it’s common practice to deploy cloud services behind private endpoints and ensure that your data isn’t traveling across the public web (even if it’s encrypted in transit). Unfortunately, you can’t deploy Databricks itself behind a private endpoint, but you can allow it to talk to your other resources that are.

The way Databricks communicates with other Azure-based resources is generally considered secure, but to be compliant with your company’s security policies, you may need to complete a few extra steps to further lock it down.

Note: All Azure Databricks network traffic between the data plane VNet and the Azure Databricks control plane goes across the Microsoft network backbone, not the public Internet. This is true even if secure cluster connectivity is disabled.

From the Azure Portal UI for Databricks, you can deploy Databricks in your own VNet, but that VNet must already exist. There is no “Create new” option as you see for other resources.
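If you do pre-create the VNet yourself, keep in mind that the two subnets Databricks uses must be delegated to the Microsoft.Databricks/workspaces service. A rough sketch of what that VNet resource looks like in ARM template JSON (the names and address ranges here are placeholders, not values from my template):

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2021-05-01",
  "name": "databricks-vnet",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
    "subnets": [
      {
        "name": "public-subnet",
        "properties": {
          "addressPrefix": "10.0.1.0/24",
          "delegations": [
            {
              "name": "databricks-delegation-public",
              "properties": { "serviceName": "Microsoft.Databricks/workspaces" }
            }
          ]
        }
      },
      {
        "name": "private-subnet",
        "properties": {
          "addressPrefix": "10.0.2.0/24",
          "delegations": [
            {
              "name": "databricks-delegation-private",
              "properties": { "serviceName": "Microsoft.Databricks/workspaces" }
            }
          ]
        }
      }
    ]
  }
}
```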

Alternatively, you can deploy Databricks as-is and it will create its own VNet. Then, from the Virtual Network Peerings option under Settings, you can peer this newly created VNet with a custom one. However, this may not make your cloud security team happy as peerings are harder to manage compared to having main VNets that can be monitored and customized more directly.

If neither of these options is what you need, fear not! I’ve come bearing another option: a shiny new ARM template 💪.

Lending a Helping…ARM?

I’ve created a custom ARM template that you can use to provision a Databricks workspace in a new VNet and with a Network Security Group (NSG).
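The key piece that wires the workspace into the custom VNet is the workspace resource’s parameters block. A simplified sketch of that part (parameter names like workspaceName and vnetName are illustrative; the actual template has more in it):

```json
{
  "type": "Microsoft.Databricks/workspaces",
  "apiVersion": "2021-04-01-preview",
  "name": "[parameters('workspaceName')]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "premium" },
  "properties": {
    "managedResourceGroupId": "[concat(subscription().id, '/resourceGroups/', parameters('managedResourceGroupName'))]",
    "parameters": {
      "customVirtualNetworkId": {
        "value": "[resourceId('Microsoft.Network/virtualNetworks', parameters('vnetName'))]"
      },
      "customPublicSubnetName": { "value": "[parameters('publicSubnetName')]" },
      "customPrivateSubnetName": { "value": "[parameters('privateSubnetName')]" }
    }
  }
}
```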

Deploying from an ARM Template

In the search bar at the top of the Azure Portal, search for “deploy” and click on the Deploy a custom template option.

Then, on the Custom deployment screen, click on the ✏ Build your own template in the editor link.

On the Edit template screen, either upload the ARM template JSON file (using the ⬆ Load file button) or copy and paste the text in the editor.

After clicking the Save button at the bottom of the screen, you should see a familiar form where you can now specify the name of your VNet and NSG (and the subnets).

Clicking the Review + create button at the bottom will start the deployment of the Databricks workspace, the new VNet, NSG, etc.

Clicking the Visualize button will show a diagram of the resources to be deployed.

Voilà! In just a few minutes, you’ll be able to use your new, securely-deployed Databricks workspace.

Bonus: Specifying the name of the Managed Resource Group

When you deploy Azure Databricks, it will create a Databricks Workspace in the Resource Group that you specify. In addition, it will create a Managed Resource Group that the platform will use to house other services for the workspace. However, the default name generation is, well, awful.

This dynamically-generated name looks something like this: databricks-rg-<workspacename>-<randomcharacters> (e.g., databricks-rg-mydatabricksws-sghxzs2sixtdk). This naming may not match your company’s naming conventions or policies and thus you may wish to customize it.

Using the ARM template above, you can now customize this name!
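Under the hood, this is just the workspace’s managedResourceGroupId property. A minimal sketch, assuming a managedResourceGroupName parameter (the parameter name is my own, not an Azure requirement):

```json
"properties": {
  "managedResourceGroupId": "[concat(subscription().id, '/resourceGroups/', parameters('managedResourceGroupName'))]"
}
```

Note that the managed resource group name must be set at deployment time; it can’t be renamed after the workspace exists.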

What’s so imPORTant about this?

When Databricks goes to spin up a cluster, it provisions multiple virtual machines that are all connected in a VNet. These machines need to communicate with one another and with other services to perform Apache Spark tasks. If your NSG doesn’t have the appropriate firewall exceptions (open ports), they won’t be able to. So, ensuring these delegated rules are defined correctly is literally the difference between being able to use Databricks and not.
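When the subnets are properly delegated, Azure manages these required rules for you, but it helps to recognize what they look like. For illustration, an outbound rule allowing the workers to reach the Databricks control plane looks roughly like this (the rule name is a placeholder; AzureDatabricks is a real Azure service tag):

```json
{
  "name": "databricks-worker-to-control-plane",
  "properties": {
    "direction": "Outbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "VirtualNetwork",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "AzureDatabricks",
    "destinationPortRange": "443",
    "priority": 100
  }
}
```

Similar outbound rules cover the Sql, Storage, and EventHub service tags, and a worker-to-worker rule allows traffic within the VNet itself.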

The standard Databricks inbound and outbound security rules in an NSG

Stay curious…

Colby T. Ford, Ph.D.

Cloud genomics and AI guy and aspiring polymath. I am a recovering academic from machine learning and bioinformatics and I sometimes write things here.