The 'lakeflow_pipelines_sql' project was generated by using the lakeflow-pipelines template.
- src/: SQL source code for this project.
- resources/: Resource configurations (jobs, pipelines, etc.).
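As a sketch of what lives under resources/, a pipeline definition in an asset bundle is a YAML file along these lines. The resource key, pipeline name, and source path below are illustrative assumptions, not the template's exact generated contents:

```yaml
# resources/<name>.pipeline.yml -- illustrative sketch, not the generated file
resources:
  pipelines:
    lakeflow_pipelines_sql_etl:
      name: lakeflow_pipelines_sql_etl
      # Point the pipeline at the SQL sources in src/ (path is an assumption)
      libraries:
        - glob:
            include: ../src/**
```

Each file under resources/ contributes resources like this to the bundle, and they are all picked up when you deploy.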
Choose how you want to work on this project:
(a) Directly in your Databricks workspace, see https://docs.databricks.com/dev-tools/bundles/workspace.
(b) Locally with an IDE like Cursor or VS Code, see https://docs.databricks.com/dev-tools/vscode-ext.html.
(c) With command line tools, see https://docs.databricks.com/dev-tools/cli/databricks-cli.html.
The Databricks workspace and IDE extensions provide a graphical interface for working with this project. It's also possible to interact with it directly using the CLI:
- Authenticate to your Databricks workspace, if you have not done so already:

  $ databricks configure

- To deploy a development copy of this project, type:

  $ databricks bundle deploy --target dev

  (Note that "dev" is the default target, so the --target parameter is optional here.)

  This deploys everything that's defined for this project. For example, the default template would deploy a pipeline called [dev yourname] lakeflow_pipelines_sql_etl to your workspace. You can find that resource by opening your workspace and clicking on Jobs & Pipelines.
- Similarly, to deploy a production copy, type:

  $ databricks bundle deploy --target prod

  Note that the default template includes a job that runs the pipeline every day (defined in resources/sample_job.job.yml). The schedule is paused when deploying in development mode (see https://docs.databricks.com/dev-tools/bundles/deployment-modes.html).
- To run a job or pipeline, use the "run" command:

  $ databricks bundle run
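The "dev" and "prod" target names used above come from the bundle's top-level databricks.yml. A minimal sketch of such a file, assuming the conventional layout this template follows (the exact generated file may differ):

```yaml
# databricks.yml -- minimal illustrative sketch
bundle:
  name: lakeflow_pipelines_sql

# Pull in all resource definitions from resources/
include:
  - resources/*.yml

targets:
  dev:
    # Development mode prefixes resource names (e.g. "[dev yourname] ...")
    # and pauses schedules on deploy
    mode: development
    default: true
  prod:
    mode: production
```

You can also pass a specific resource key to the run command, e.g. `databricks bundle run sample_job` if the job in resources/sample_job.job.yml is keyed `sample_job` (the key name here is an assumption).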