
Workspace Config

Skill: databricks-config

You can manage connections to multiple Databricks workspaces — dev, staging, production — from a single machine. The workspace config skill handles profile management, OAuth authentication, and live workspace switching so your AI coding assistant always targets the right environment. No more deploying to prod when you meant dev.

“Set up Databricks CLI authentication for my dev and prod workspaces using OAuth, then switch to the prod profile.”

# Authenticate to each workspace (interactive OAuth flow)
databricks auth login \
--host https://dbc-abc123.cloud.databricks.com \
-p dev
databricks auth login \
--host https://dbc-xyz789.cloud.databricks.com \
-p prod
# ~/.databrickscfg (generated by auth login)
[dev]
host = https://dbc-abc123.cloud.databricks.com
auth_type = databricks-cli
[prod]
host = https://dbc-xyz789.cloud.databricks.com
auth_type = databricks-cli
# Inspect the resolved auth settings for a profile
databricks auth env -p dev
# Switch profile for subsequent commands
export DATABRICKS_CONFIG_PROFILE=prod
databricks clusters list # now hits prod

Key decisions:

  • OAuth via auth login — generates and auto-refreshes tokens. Preferred over PATs, which expire and get committed to repos by accident.
  • Named profiles — each workspace gets a short name (dev, prod) instead of a URL. Every CLI command accepts -p <profile> to target a specific workspace.
  • DATABRICKS_CONFIG_PROFILE env var — sets the default profile for the current shell session. Avoids passing -p on every command.
  • No manual .databrickscfg editing — auth login writes the config file correctly. Hand-editing risks malformed entries that produce confusing “invalid host” errors.

“Which workspace am I connected to right now?”

Your AI coding assistant uses the manage_workspace MCP tool with action="status" to report the active host, profile, and authenticated user. This runs without any CLI commands and works inside any coding session.

Active workspace:
Host: https://dbc-abc123.cloud.databricks.com
Profile: dev
User: user@example.com

The switch is session-scoped — it resets when the MCP server restarts. For a permanent default, set the profile in your shell config.

“Show me all the Databricks workspaces I have set up.”

Your AI coding assistant calls manage_workspace with action="list" to display every profile in ~/.databrickscfg with its host URL and which one is currently active.

Profile   Host                                       Active
dev       https://dbc-abc123.cloud.databricks.com    *
prod      https://dbc-xyz789.cloud.databricks.com
staging   https://dbc-def456.cloud.databricks.com

Use this to verify your profiles before running deploys. If a workspace is missing, run databricks auth login --host <url> -p <name> to add it.
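The listing itself is just an INI parse of ~/.databrickscfg, which you can reproduce with the standard library. A sketch — the MCP tool's actual implementation may differ:

```python
import configparser
from pathlib import Path

def list_profiles(cfg_path: str = "~/.databrickscfg") -> list[tuple[str, str]]:
    """Return (profile, host) pairs from a .databrickscfg-style INI file."""
    parser = configparser.ConfigParser()
    parser.read(Path(cfg_path).expanduser())
    return [(name, parser[name].get("host", "")) for name in parser.sections()]
```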

Configure cluster or serverless for a profile

Section titled “Configure cluster or serverless for a profile”

“Set up my dev profile to default to serverless compute.”

~/.databrickscfg
[dev]
host = https://dbc-abc123.cloud.databricks.com
auth_type = databricks-cli
serverless_compute_id = auto

Adding serverless_compute_id = auto makes databricks-connect and notebook execution default to serverless for that profile. For cluster-based workflows, use cluster_id = 0123-456789-abcdef12 instead. This avoids passing compute IDs on every command.
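Because the file is plain INI, the change can also be scripted rather than typed by hand. A hedged sketch using Python's configparser (note it rewrites the file's formatting, so back it up first; auth login remains the right way to create profiles in the first place):

```python
import configparser
from pathlib import Path

def set_serverless(cfg_path: str, profile: str) -> None:
    """Add serverless_compute_id = auto to an existing profile in a
    .databrickscfg-style file, preserving the profile's other keys."""
    path = Path(cfg_path).expanduser()
    parser = configparser.ConfigParser()
    parser.read(path)
    parser[profile]["serverless_compute_id"] = "auto"
    with path.open("w") as f:
        parser.write(f)
```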

Common pitfalls:

  • Expired OAuth tokens with no visible error — OAuth tokens auto-refresh, but if the refresh token itself expires (typically 90 days without use), CLI commands fail with vague “unauthorized” errors. Re-run databricks auth login to fix.
  • DATABRICKS_HOST overrides profile — if you have DATABRICKS_HOST set as an environment variable, it takes precedence over any profile you pass with -p, so commands hit the wrong workspace. Run unset DATABRICKS_HOST before switching profiles.
  • Session-scoped MCP switches — the manage_workspace tool switches profiles for the current session only. Restarting your editor or MCP server resets to the default profile. For persistent defaults, export DATABRICKS_CONFIG_PROFILE in your shell rc file.