Project: databricks-connect

Databricks Connect Client

Project Details

Latest version: 14.2.0
PyPI page: https://pypi.org/project/databricks-connect/

Databricks Connect

Databricks Connect is a Python library to run PySpark DataFrame queries on a remote Spark cluster. Databricks Connect leverages the power of Spark Connect. An application using Databricks Connect runs locally, and when the results of a DataFrame query need to be evaluated, the query is run on a configured Databricks cluster.

The following is a simple Python program that uses Databricks Connect and prints a range of numbers. The range query is executed on the Databricks cluster.

from databricks.connect import DatabricksSession

# Create a Spark session backed by the configured Databricks cluster
session = DatabricksSession.builder.getOrCreate()

# The range query below is evaluated remotely on the cluster
df = session.range(1, 10)
df.show()

Specifying Connection Parameters

DatabricksSession offers a few ways to specify the Databricks workspace, cluster, and user credentials, collectively referred to in the rest of this document as connection parameters. The specified credentials are used to execute the DataFrame queries on the cluster. The user associated with these credentials must have cluster access permissions and appropriate data access permissions.

NOTE: Currently, Databricks Connect only supports credentials based on Personal Access Token. Other authentication mechanisms are coming soon.

When DatabricksSession is initialized with no additional parameters as below, connection parameters are picked up from the environment.

session = DatabricksSession.builder.getOrCreate()

First, the SPARK_REMOTE environment variable is used, if it is configured.

When set, the SPARK_REMOTE environment variable must contain a Spark Connect connection string. Read more about the Spark Connect connection string in the Spark Connect documentation.

SPARK_REMOTE="sc://<databricks workspace url>:443/;token=<bearer token>;x-databricks-cluster-id=<cluster id>"

If this environment variable is not configured, Databricks Connect then looks for connection parameters using the Databricks SDK.

The Databricks Python SDK reads these values from two locations: first from environment variables that may be configured, and then, for any parameters not configured via environment variables, from the 'DEFAULT' profile in the .databrickscfg configuration file, if it is set up. The Databricks Python SDK also facilitates OAuth token refresh and supports Service Principal client credentials on AWS and Azure. Details on the authentication process, environment variables, and other configuration options can be found in the Databricks SDK documentation.

Similar to the authentication environment variables, the Databricks SDK reads the cluster identifier from the environment variable DATABRICKS_CLUSTER_ID or from the cluster_id entry in the config file.
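
For example, a minimal 'DEFAULT' profile in the .databrickscfg file might look like the following (the values shown are placeholders):

[DEFAULT]
host = https://<databricks workspace url>
token = <personal access token>
cluster_id = <databricks cluster id>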

If a specific profile from the config file needs to be used, it can be selected as follows:

from databricks.connect import DatabricksSession

session = DatabricksSession.builder.profile("profile_name").getOrCreate()

Connection parameters can also be specified directly in code.

session = DatabricksSession.builder.remote(
    host="<databricks workspace url>",
    cluster_id="<databricks cluster id>",
    token="<bearer token>"
).getOrCreate()

Alternatively, the connection can be initialized from a Config object from the Databricks SDK:

from databricks.sdk.core import Config

config = Config(...)
DatabricksSession.builder.sdkConfig(config).getOrCreate()
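
As an illustrative sketch, the Config object can carry the same connection parameters shown earlier; the attribute names below (host, token, cluster_id) follow the Databricks SDK configuration attributes, and the values are placeholders:

from databricks.sdk.core import Config
from databricks.connect import DatabricksSession

# Illustrative only: replace the placeholder values with real ones.
config = Config(
    host="<databricks workspace url>",
    token="<bearer token>",
    cluster_id="<databricks cluster id>",
)

session = DatabricksSession.builder.sdkConfig(config).getOrCreate()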

The Spark Connect connection string can also be specified directly in code.

session = DatabricksSession.builder\
    .remote("sc://<databricks workspace url>:443/;token=<bearer token>;x-databricks-cluster-id=<cluster id>")\
    .getOrCreate()

In summary, connection parameters are resolved in the following order; evaluation stops as soon as all connection parameters are available.

  1. Specified directly using remote(), either as a connection string or as keyword arguments.
  2. Specified via the Databricks SDK using sdkConfig() or profile().
  3. Specified in the SPARK_REMOTE environment variable.
  4. Specified via the Databricks SDK's default authentication.

Debugging

Databricks Connect can generate debug logs in case they are needed for inspection.

Debug logs can be enabled by setting the environment variable SPARK_CONNECT_LOG_LEVEL=debug, for example:

$ SPARK_CONNECT_LOG_LEVEL=debug python3 myprogram.py
2023-07-24 14:40:28,505 50147 DEBUG Enabled debug logs for databricks-connect
2023-07-24 14:40:28,505 50147 DEBUG IPython module is present.
2023-07-24 14:40:28,505 50147 DEBUG Falling back to default configuration from the SDK.
2023-07-24 14:40:28,505 50147 DEBUG Loaded from environment
2023-07-24 14:40:28,505 50147 DEBUG Attempting to configure auth: pat
...

OAuth

The Databricks Connect module, via the Databricks SDK, supports the OAuth authentication mechanism. This can be configured via configuration profiles in the .databrickscfg file. See [TBD: link here] for how to set up and use configuration profiles.

The following configuration profile snippet sets up OAuth integration via the Azure CLI, and should be added to the .databrickscfg file.

[azure-cli]
host = https://adb-XXX.azuredatabricks.net
auth_type = azure-cli
cluster_id = <databricks cluster id>

Similarly, the following snippet sets up OAuth integration via an Azure Active Directory (AAD) service principal.

[azure-aad]
host = https://adb-XXX.azuredatabricks.net
azure_tenant_id = 00000000-0000-0000-0000-000000000001
azure_client_id = 00000000-0000-0000-0000-000000000002
azure_client_secret = s0M3p@$$wrd
cluster_id = YYY
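
Either profile can then be selected by name when constructing the session, using the profile() method shown earlier. For example:

from databricks.connect import DatabricksSession

# Select the [azure-cli] profile defined in .databrickscfg
session = DatabricksSession.builder.profile("azure-cli").getOrCreate()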

Custom Headers

DatabricksSession supports setting custom headers, in case your remote endpoint requires them. You can do it as follows:

DatabricksSession.builder.header('x-custom-header', 'value').getOrCreate()

This can be combined with other session configurations.
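
For example, a custom header can be combined with an explicit profile selection. This is a sketch assuming the builder methods chain as shown above; the header name and profile name are placeholders:

from databricks.connect import DatabricksSession

# Combine a custom header with a named configuration profile.
session = DatabricksSession.builder \
    .profile("profile_name") \
    .header("x-custom-header", "value") \
    .getOrCreate()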