
I couldn't find this exact question on Stack Overflow; apologies if it is a duplicate. I am using this code snippet to run a query against a table:

```python
_body = {'_query': 'SELECT * FROM `<projectId>.<datasetId>.<tableId>`',
         'useLegacySql': False,
         'maxResults': 100}
table = _bq.jobs().query(projectId=<projectId>, alt="json", body=_body).execute()
```

I am passing all the required arguments: when I use the jobs.query API directly I get status code 200 with results, but when I integrate the snippet into a Python program I get the following error:

```
File "D:\Applications\Python27\lib\site-packages\oauth2client\_helpers.py", line 133, in positional_wrapper
    return wrapped(*args, **kwargs)
File "D:\Applications\Python27\lib\site-packages\googleapiclient\http.py", line 842, in execute
    raise HttpError(resp, content, uri=self.uri)
HttpError: <https://www.googleapis.com/bigquery/v2/projects/projectId/queries?alt=json returned "Required parameter is missing">
```

3 Answers


I believe the issue is not with the BigQuery parameters but with the execute() parameters. It should have an http parameter where you supply an HTTP context carrying the user credentials.

In App Engine you can do something like this:

```python
SCOPE = ('https://www.googleapis.com/auth/bigquery '
         'https://www.googleapis.com/auth/cloud-platform')
_http = AppAssertionCredentials(scope=SCOPE).authorize(httplib2.Http(timeout=600))
table = _bq.jobs().query(projectId=<projectId>, alt="json", body=_body).execute(http=_http)
```

1 Comment

Could you please explain what's happening with http?

I am not sure which library you are using in your example, but I would advise using the Python BigQuery Client Library. You will find the complete reference (as well as plenty of examples) on its GitHub reference page.

More specifically, here you will find some examples on how to query data using the Python Client Library.

After installing the client library and setting up authentication (both steps are explained in the first link I shared), you will be able to execute a script such as the one below. It queries a public dataset, but feel free to modify the query to suit your needs.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Define the query
query = "SELECT * FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 10"

# Create the query job; by default it uses Standard SQL
query_job = client.query(query)
results = query_job.result()  # Waits for job to complete.

for row in results:
    print("{}".format(row.title))
```

Also note that this Client Library uses Standard SQL by default (which is the preferred language to work with BigQuery), but you can always modify the job settings by adjusting the QueryJobConfig.

2 Comments

I do not want to use client library.
@SwatiSneha - so please look at my answer - and do not forget to upvote AND accept the answer if it works for you

You are not sending the required parameters indeed. Check this example for how to properly do so:

```python
def sync_query(service, project_id, query, timeout=10000, num_retries=5):
    query_data = {
        'query': query,
        'timeoutMs': timeout,
    }
    return service.jobs().query(
        projectId=project_id,
        body=query_data).execute(num_retries=num_retries)
```
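For comparison, here is a minimal sketch (plain dictionaries; the query text is a placeholder) of why the body from the question fails: jobs.query requires a parameter named `query`, and the question's body spells it `_query`, so from the API's point of view the required parameter is simply absent:

```python
# Body from the question: '_query' is not a recognized field, so the API
# never receives the required 'query' parameter.
failing_body = {'_query': 'SELECT 1', 'useLegacySql': False, 'maxResults': 100}

# Corrected body: the required field is named 'query'.
working_body = {'query': 'SELECT 1', 'useLegacySql': False, 'maxResults': 100}

print('query' in failing_body)   # False
print('query' in working_body)   # True
```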

1 Comment

I tried:

```python
table = _bq.jobs().query(projectId=project_id, alt="json", body=query_data).execute(num_entries=10)
```

and got:

```
File "D:\Applications\Python27\lib\site-packages\oauth2client\_helpers.py", line 133, in positional_wrapper
    return wrapped(*args, **kwargs)
TypeError: execute() got an unexpected keyword argument 'num_entries'
```
