
I am currently working with the Delta Sharing API and have encountered an issue when using a Service Principal token for authentication. The API call returns the following error:

[CANNOT_INFER_EMPTY_SCHEMA] Can not infer schema from empty dataset.

However, when I use a personal access token, the API works as expected.

My questions are:

  1. Does the Delta Sharing API only support personal access tokens for authentication?
  2. Is there any additional configuration or permission required when using a Service Principal token for Delta Sharing?
  3. Has anyone else encountered this schema inference error, and how can it be resolved?

Thanks

  • Add more details on how you are using the service principal for authentication. Commented Sep 26, 2024 at 5:53
  • Hi Jayashankar, We start by creating an Azure AD (AAD) token using the Service Principal credentials (client_id, client_secret, and tenant_id). This AAD token is used to authenticate API requests. Commented Sep 27, 2024 at 7:12
  • Hi Jayashankar, 1. Created an Azure AD (AAD) token using the Service Principal credentials. This AAD token is used for authenticating API requests. 2. Using that AAD token, I created a Databricks PAT valid for 3 months; the AAD token is passed in the Authorization header to authenticate the request to Databricks. 3. Once I had the PAT, I used it in Python code: headers = {"Authorization": f"Bearer {secret_token}"}; response = requests.get(api_url, headers=headers); data = response.json(). Let me know if you need any further details. Commented Sep 27, 2024 at 7:19
  • Did you follow this to generate the token? docs.databricks.com/en/dev-tools/auth/… Commented Sep 27, 2024 at 7:44
  • Yes, I followed exactly the same steps. Commented Sep 28, 2024 at 18:28
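The token flow described in the comments (client-credentials grant against Entra ID, then use the resulting AAD token as a bearer token) can be sketched roughly as follows. This is a minimal sketch, not the asker's exact code: the tenant/client values are placeholders, and the resource app ID below is assumed to be the standard first-party Azure Databricks application ID.

```python
import requests


# Assumption: well-known first-party application ID of the Azure Databricks
# resource in Entra ID, used as the scope for the client-credentials flow.
DATABRICKS_RESOURCE_ID = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d"


def build_token_request(tenant_id, client_id, client_secret):
    """Build the Entra ID client-credentials token request (URL + form body)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": f"{DATABRICKS_RESOURCE_ID}/.default",
    }
    return url, data


def get_aad_token(tenant_id, client_id, client_secret):
    """POST the client-credentials request and return the AAD access token."""
    url, data = build_token_request(tenant_id, client_id, client_secret)
    resp = requests.post(url, data=data)
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The returned `access_token` is what gets passed as `Authorization: Bearer <token>` in the subsequent Databricks API calls.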

1 Answer


First, add your service principal in the Databricks admin settings.


Next, click on Add new.


Then select Microsoft Entra ID managed and enter the client ID.


After adding the service principal, create a secret for it.


Use this secret to generate a new Databricks PAT for authorization.
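Once you have an AAD token for the service principal, the PAT generation step can be sketched with the Databricks token-create endpoint (`POST /api/2.0/token/create`). This is a sketch under assumptions: the workspace URL is a placeholder, and the 90-day lifetime mirrors the "valid for 3 months" mentioned in the comments.

```python
import requests


def build_pat_request(workspace_url, aad_token, lifetime_seconds=7776000):
    """Build URL, headers, and payload for Databricks' token-create API."""
    url = f"{workspace_url}/api/2.0/token/create"
    headers = {"Authorization": f"Bearer {aad_token}"}
    payload = {
        "lifetime_seconds": lifetime_seconds,  # 7776000 s ~= 90 days
        "comment": "PAT for Delta Sharing API calls",  # free-form label
    }
    return url, headers, payload


def create_pat(workspace_url, aad_token):
    """Exchange the AAD token for a Databricks PAT and return its value."""
    url, headers, payload = build_pat_request(workspace_url, aad_token)
    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()["token_value"]
```

Note that the PAT created this way belongs to the service principal, so any subsequent Unity Catalog calls made with it are authorized against the service principal's own grants, not yours.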


6 Comments

Getting the same error with the secrets: [CANNOT_INFER_EMPTY_SCHEMA] Can not infer schema from empty dataset.
Can you check the permissions once? Does your service principal have enough permissions on the schema and table?
We have ALL PRIVILEGES granted to the SP on the catalog, schema, and table.
Can you add the code you are using to read this table?
Due to character limitations, splitting the code into 2 comments:
import requests
import json
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_unixtime, from_utc_timestamp
spark = SparkSession.builder.appName("LoadSharesData").getOrCreate()
tk = dbutils.secrets.get(scope="dev", key="kydev")
api_url = "https://x.x.azuredatabricks.net/api/2.1/unity-catalog/shares"
headers = {"Authorization": "Bearer {}".format(tk)}
#headers = {"Authorization": f"Bearer xx"}
response = requests.get(api_url, headers=headers)
data = response.json()
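One likely cause of [CANNOT_INFER_EMPTY_SCHEMA] here is that Spark is asked to build a DataFrame from the API response, and the service principal's token yields an empty shares list (it can only see shares it has access to), so there is nothing to infer a schema from. A minimal sketch of a check to run before handing the response to Spark; `diagnose_shares_response` is a hypothetical helper, not part of any Databricks API:

```python
def diagnose_shares_response(status_code, payload):
    """Hypothetical helper: classify the /unity-catalog/shares response
    before passing it to spark.createDataFrame."""
    if status_code in (401, 403):
        return "auth-error: token rejected; check the service principal's workspace access"
    if status_code != 200:
        return f"http-error: unexpected status {status_code}"
    shares = payload.get("shares", [])
    if not shares:
        # An empty list is what triggers CANNOT_INFER_EMPTY_SCHEMA downstream:
        # either the principal genuinely sees no shares, or it lacks grants.
        return "empty: no shares visible to this principal; grant access or pass an explicit schema"
    return f"ok: {len(shares)} share(s) returned"
```

If an empty result is legitimate, passing an explicit schema (a `StructType`) to `spark.createDataFrame` instead of relying on inference avoids the error entirely.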
