MCP Tools for App Lifecycle

Skill: databricks-app-python

You can go from local code to a running Databricks App without leaving your AI coding assistant. The MCP tools handle the full lifecycle — uploading source files to the workspace, creating or updating the app, triggering deployments, and pulling logs when something goes wrong. This is the conversational equivalent of the deploy workflow, but faster because you skip the UI entirely.

“Upload my app folder to the workspace, then create and deploy a Databricks App called customer-dashboard pointing at that folder.”

# Step 1: MCP Tool — upload_folder
upload_folder(
    local_folder="/path/to/my_app",
    workspace_folder="/Workspace/Users/user@example.com/my_app",
)

# Step 2: MCP Tool — create_or_update_app (creates if new, redeploys if exists)
result = create_or_update_app(
    name="customer-dashboard",
    description="Customer analytics dashboard",
    source_code_path="/Workspace/Users/user@example.com/my_app",
)
# Returns: {"name": "customer-dashboard", "url": "...", "created": True, "deployment": {...}}

Key decisions:

  • create_or_update_app is idempotent — it creates the app on first call and redeploys on subsequent calls. No need to check if the app exists first.
  • Always upload files before deploying. The source_code_path must point to a workspace folder that already contains your code.
  • The app name becomes part of the URL, so keep it short, lowercase, and hyphenated.
  • The tool returns the live URL immediately, but the app takes 1-2 minutes to become healthy after deployment.
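Since the app name ends up in the URL, it is worth validating before the first deploy. A minimal sketch of such a check — the helper name and regex are illustrative, not part of the MCP tool set:

```python
import re

# Hypothetical helper: enforce short, lowercase, hyphenated app names
# before they become part of the app URL.
def is_valid_app_name(name: str, max_len: int = 30) -> bool:
    """Lowercase letters, digits, and hyphens; no leading/trailing hyphen."""
    return (
        len(name) <= max_len
        and re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name) is not None
    )

print(is_valid_app_name("customer-dashboard"))  # True
print(is_valid_app_name("Customer_Dashboard"))  # False
```

Running the check before calling create_or_update_app avoids a round trip that would fail on an invalid name.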

“Check if customer-dashboard is running and show me the logs.”

# MCP Tool — get_app (with logs)
app = get_app(name="customer-dashboard", include_logs=True)
# Returns: {"name": "...", "url": "...", "status": "RUNNING", "logs": "..."}

When a deployment fails, the logs field contains the stderr output from your app process — import errors, missing environment variables, port binding failures. This is the first thing to check before digging into the workspace UI.
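Because the app takes a minute or two to become healthy, a small polling loop around get_app is a natural pattern. A sketch, with the MCP tool passed in as a callable so that only the retry logic here is an assumption (the failure states named below are also assumptions, not documented values):

```python
import time

def wait_until_running(get_app, name, timeout_s=180, interval_s=10):
    """Poll get_app until the app reports RUNNING, or raise with its logs."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        app = get_app(name=name, include_logs=True)
        status = app.get("status")
        if status == "RUNNING":
            return app
        if status in ("ERROR", "CRASHED"):  # assumed failure states
            raise RuntimeError(f"{name} failed:\n{app.get('logs', '')}")
        time.sleep(interval_s)
    raise TimeoutError(f"{name} not RUNNING after {timeout_s}s")
```

Passing include_logs=True on every poll means a failed deployment surfaces its stderr immediately instead of requiring a second call.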

“I updated my app locally. Redeploy customer-dashboard with the latest code.”

# Re-upload the updated source
upload_folder(
    local_folder="/path/to/my_app",
    workspace_folder="/Workspace/Users/user@example.com/my_app",
)

# Trigger redeployment (same call as creation)
result = create_or_update_app(
    name="customer-dashboard",
    description="Customer analytics dashboard",
    source_code_path="/Workspace/Users/user@example.com/my_app",
)

The typical development loop is: edit locally, upload, redeploy, check logs. Your AI coding assistant can run the last three steps in sequence when you say “redeploy my app.”
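That upload-then-redeploy sequence can be scripted as one helper. In this sketch the MCP tool callables are passed in as arguments, so only the sequencing — not any tool signature beyond what is shown above — is being assumed:

```python
def redeploy(upload_folder, create_or_update_app, *,
             name, local_folder, workspace_folder, description=""):
    """Upload the latest local source, then create or redeploy the app."""
    upload_folder(
        local_folder=local_folder,
        workspace_folder=workspace_folder,
    )
    return create_or_update_app(
        name=name,
        description=description,
        source_code_path=workspace_folder,
    )
```

Keeping upload and deploy in one function makes it impossible to redeploy stale workspace code by forgetting the upload step.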

“Tear down the customer-dashboard app, I’m done testing.”

# MCP Tool — delete_app
delete_app(name="customer-dashboard")

This removes the app and its deployment. The workspace source files remain untouched — only the running app is deleted.

Your AI coding assistant generates apps following this layout. Keeping to this structure means upload_folder works without path adjustments:

my_app/
├── app.py            # Main application entry point
├── models.py         # Pydantic models / data classes
├── backend.py        # Data access layer
├── requirements.txt  # Additional dependencies (not pre-installed ones)
└── app.yaml          # Databricks Apps configuration
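For the app.yaml in that layout, a minimal configuration might look like the following — the command and env keys follow the Databricks Apps configuration format, but the values here are illustrative:

```yaml
command: ["python", "app.py"]
env:
  - name: "APP_ENV"
    value: "production"
```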
Common pitfalls:

  • Deploying before uploading — create_or_update_app does not upload files for you. If the source_code_path is empty or stale, the deployment either fails or runs old code.
  • Checking status too early — the app takes 1-2 minutes to start after deployment. Calling get_app immediately may show STARTING status. Wait a moment, then check again with include_logs=True.
  • Forgetting include_logs=True — without this flag, get_app returns metadata but no logs. When debugging a failed deployment, always pass include_logs=True to see the actual error output.