Supported Frameworks
Skill: databricks-app-python
What You Can Build
Every framework below is pre-installed in the Databricks Apps runtime, so you only add extra packages to `requirements.txt`. This page covers the Databricks-specific patterns you need — auth header extraction, port binding, caching strategy — not general framework usage. For full recipes, see the Databricks Apps Cookbook.
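Each framework section below ends with its `app.yaml` start command; the file itself stays small. A minimal sketch, assuming the Streamlit command and the list-style `env` form — the warehouse ID value is a placeholder to replace with your own:

```yaml
# app.yaml — tells the Apps runtime how to start the process
command: ["streamlit", "run", "app.py"]
env:
  - name: DATABRICKS_WAREHOUSE_ID
    value: "<your-warehouse-id>"
```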
In Action
“Using Python and Streamlit, build a Databricks App that queries a SQL warehouse with connection caching.”
```python
import os

import streamlit as st
from databricks import sql
from databricks.sdk.core import Config

st.set_page_config(page_title="Sales Dashboard", layout="wide")

@st.cache_resource(ttl=300)
def get_connection():
    cfg = Config()
    return sql.connect(
        server_hostname=cfg.host,
        http_path=f"/sql/1.0/warehouses/{os.getenv('DATABRICKS_WAREHOUSE_ID')}",
        credentials_provider=lambda: cfg.authenticate,
    )

conn = get_connection()
```

Key decisions:

- `st.set_page_config()` must be the first Streamlit command — placing it after any other `st.*` call raises an error
- `@st.cache_resource` prevents creating a new SQL connection on every rerun, which would exhaust the connection pool within minutes
- Use `@st.cache_data(ttl=...)` for query results, `@st.cache_resource` for connections and models
- The `app.yaml` command for Streamlit is `["streamlit", "run", "app.py"]`
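The two decorators differ in what they memoize: `cache_data` stores serializable results, `cache_resource` keeps one shared object alive across reruns. A plain-Python sketch of the `cache_resource` idea — a hypothetical helper, not Streamlit's implementation:

```python
import time

_resources: dict = {}

def cache_resource(key, factory, ttl=300):
    # Hypothetical stand-in for @st.cache_resource(ttl=300): build the
    # object once, then hand back the same instance until the TTL expires.
    entry = _resources.get(key)
    if entry is None or time.monotonic() - entry[1] > ttl:
        entry = (factory(), time.monotonic())
        _resources[key] = entry
    return entry[0]

calls = []

def make_connection():
    calls.append(1)   # count how often the "connection" is actually built
    return object()   # stands in for sql.connect(...)

first = cache_resource("conn", make_connection)
second = cache_resource("conn", make_connection)
print(first is second, len(calls))  # → True 1
```

Without the cache, every Streamlit rerun would call the factory again — exactly the connection churn the bullet above warns about.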
More Patterns
Production dashboard with Dash
“Using Python and Dash, scaffold a dashboard with Bootstrap styling and auth header access.”
```python
import os

import dash
import dash_bootstrap_components as dbc
from flask import request

app = dash.Dash(
    __name__,
    external_stylesheets=[dbc.themes.BOOTSTRAP, dbc.icons.FONT_AWESOME],
    title="Revenue Dashboard",
)

# Access the user token inside a callback:
# user_token = request.headers.get("x-forwarded-access-token")

app.run(port=int(os.environ.get("DATABRICKS_APP_PORT", 8000)))
```

Always use dash-bootstrap-components for layout. Dash runs Flask under the hood, so you access the user token via `request.headers`. The `app.yaml` command is `["python", "app.py"]`.
ML model demo with Gradio
“Using Python and Gradio, build a model inference UI that forwards the user’s token to a serving endpoint.”
```python
import os

import gradio as gr
from databricks.sdk.core import Config

cfg = Config()

def predict(message, request: gr.Request):
    user_token = request.headers.get("x-forwarded-access-token")
    # Call a model serving endpoint with user_token or cfg.authenticate()
    return f"Response to: {message}"

demo = gr.Interface(fn=predict, inputs="text", outputs="text")
port = int(os.environ.get("DATABRICKS_APP_PORT", 8000))
demo.launch(server_name="0.0.0.0", server_port=port)
```

Gradio is a natural fit for model serving demos and chat interfaces. The `gr.Request` parameter gives you access to the auth header. The `app.yaml` command is `["python", "app.py"]`.
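Forwarding the token usually means calling the endpoint's invocations URL with a bearer header. A sketch of the request construction — the helper name is hypothetical, and the `/serving-endpoints/<name>/invocations` path follows the usual model-serving convention, so verify the URL in your workspace:

```python
def build_invocation_request(host: str, endpoint: str, user_token: str):
    # Hypothetical helper: assemble the invocation URL and a bearer header
    # so the serving call runs with the forwarded user's identity.
    url = f"https://{host}/serving-endpoints/{endpoint}/invocations"
    headers = {
        "Authorization": f"Bearer {user_token}",
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = build_invocation_request(
    "my-workspace.cloud.databricks.com", "chat-model", "token123"
)
print(url)
```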
REST API with FastAPI
“Using Python and FastAPI, create an API endpoint that queries with the user’s identity.”
```python
from fastapi import FastAPI, Request
from databricks import sql
from databricks.sdk.core import Config

app = FastAPI(title="Data API")
cfg = Config()

@app.get("/api/data")
async def get_data(request: Request):
    user_token = request.headers.get("x-forwarded-access-token")
    conn = sql.connect(
        server_hostname=cfg.host,
        http_path="/sql/1.0/warehouses/<id>",
        access_token=user_token,
    )
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM catalog.schema.table LIMIT 10")
        return cursor.fetchall()
```

FastAPI auto-generates OpenAPI docs at `/docs`. The Databricks SQL connector is synchronous, so use `asyncio.to_thread()` if you need non-blocking execution. The `app.yaml` command is `["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]`.
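Because the connector is synchronous, wrapping the call keeps FastAPI's event loop free to serve other requests. A minimal sketch, with a stand-in function in place of the real query:

```python
import asyncio

def run_query():
    # Stand-in for the synchronous databricks-sql-connector call above.
    return [("row1",), ("row2",)]

async def get_data():
    # asyncio.to_thread runs the blocking function in a worker thread,
    # so the event loop is not blocked while the query executes.
    return await asyncio.to_thread(run_query)

print(asyncio.run(get_data()))  # → [('row1',), ('row2',)]
```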
Webhook receiver with Flask
“Using Python and Flask, create a webhook endpoint deployed with Gunicorn.”
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/webhook", methods=["POST"])
def handle_webhook():
    payload = request.get_json()
    user_token = request.headers.get("x-forwarded-access-token")
    # Process the webhook payload...
    return jsonify({"status": "received"})
```

Deploy Flask with Gunicorn — never use the built-in dev server in production. The `app.yaml` command is `["gunicorn", "app:app", "-w", "4", "-b", "0.0.0.0:8000"]`.
Full-stack Python app with Reflex
“Using Python and Reflex, build a reactive UI that loads data from a SQL warehouse.”
```python
import reflex as rx
from databricks.sdk.core import Config

cfg = Config()

class State(rx.State):
    data: list[dict] = []

    def load_data(self):
        from databricks import sql

        conn = sql.connect(
            server_hostname=cfg.host,
            http_path="/sql/1.0/warehouses/<id>",
            credentials_provider=lambda: cfg.authenticate,
        )
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM catalog.schema.table LIMIT 10")
            self.data = [
                dict(zip([d[0] for d in cursor.description], row))
                for row in cursor.fetchall()
            ]
```

Reflex gives you a reactive frontend without writing JavaScript. The `app.yaml` command is `["reflex", "run", "--env", "prod"]`.
Watch Out For
- Binding to the wrong port — all frameworks must bind to the `DATABRICKS_APP_PORT` environment variable (defaults to 8000). Never use 8080. Streamlit is auto-configured by the runtime; all others need explicit port binding in code or the `app.yaml` command.
- Using Flask’s dev server in production — Flask’s `app.run()` is single-threaded and not suitable for production traffic. Always deploy behind Gunicorn with multiple workers.
- Forgetting `@st.cache_resource` for connections — Streamlit reruns the entire script on every interaction. Without caching, you create and discard database connections on every click, leading to connection exhaustion.
- Adding pre-installed frameworks to `requirements.txt` — Dash, Streamlit, Gradio, Flask, and FastAPI are already in the runtime. Listing them can cause version conflicts. Only add packages that are not pre-installed.
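The port rule above reduces to one line that every non-Streamlit entry point should contain:

```python
import os

# Databricks Apps injects DATABRICKS_APP_PORT; fall back to 8000 for local runs.
port = int(os.environ.get("DATABRICKS_APP_PORT", 8000))
print(port)
```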