Manage datasets
This document describes how to copy datasets, recreate datasets in another location, secure datasets, delete datasets, restore tables from deleted datasets, and undelete datasets in BigQuery.
As a BigQuery administrator, you can organize and control access to tables and views that analysts use. For more information about datasets, see Introduction to datasets.
You cannot change the name of an existing dataset or relocate a dataset after it's created. As a workaround for changing the dataset name, you can copy a dataset and change the destination dataset's name. To relocate a dataset, you can either copy it to a dataset in another location or recreate it in the new location, as described later in this document.
Required roles
This section describes the roles and permissions that you need to manage datasets. If your source or destination dataset is in the same project as the one you are using to copy, then you don't need extra permissions or roles on that dataset.
To get the permissions that you need to manage datasets, ask your administrator to grant you the following IAM roles:
- Copy a dataset (Beta):
  - BigQuery Admin (`roles/bigquery.admin`) on the destination project
  - BigQuery Data Viewer (`roles/bigquery.dataViewer`) on the source dataset
  - BigQuery Data Editor (`roles/bigquery.dataEditor`) on the destination dataset
- Delete a dataset: BigQuery Data Owner (`roles/bigquery.dataOwner`) on the project
- Restore a deleted dataset: BigQuery User (`roles/bigquery.user`) on the project
For more information about granting roles, see Manage access to projects, folders, and organizations.
These predefined roles contain the permissions required to manage datasets. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to manage datasets:
- Copy a dataset:
  - `bigquery.transfers.update` on the destination project
  - `bigquery.jobs.create` on the destination project
  - `bigquery.datasets.get` on the source and destination dataset
  - `bigquery.tables.list` on the source and destination dataset
  - `bigquery.datasets.update` on the destination dataset
  - `bigquery.tables.create` on the destination dataset
- Delete a dataset:
  - `bigquery.datasets.delete` on the project
  - `bigquery.tables.delete` on the project
- Restore a deleted dataset:
  - `bigquery.datasets.create` on the project
  - `bigquery.datasets.get` on the dataset
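As an informal aid, the permission lists above can also be expressed and checked programmatically. The following is a minimal sketch in plain Python; the `missing_permissions` helper is hypothetical and only diffs the permission sets copied from this section against a list you supply (it does not query IAM):

```python
# Required permissions per task, as listed in this section.
REQUIRED_PERMISSIONS = {
    "copy": {
        "bigquery.transfers.update",
        "bigquery.jobs.create",
        "bigquery.datasets.get",
        "bigquery.tables.list",
        "bigquery.datasets.update",
        "bigquery.tables.create",
    },
    "delete": {"bigquery.datasets.delete", "bigquery.tables.delete"},
    "restore": {"bigquery.datasets.create", "bigquery.datasets.get"},
}


def missing_permissions(task, granted):
    """Return the permissions still needed for a task, given those granted."""
    return sorted(REQUIRED_PERMISSIONS[task] - set(granted))


print(missing_permissions("delete", ["bigquery.datasets.delete"]))
# ['bigquery.tables.delete']
```

In practice you would obtain the granted permissions from your administrator or an IAM policy check rather than hard-coding them.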
You might also be able to get these permissions with custom roles or other predefined roles.
For more information about IAM roles and permissions in BigQuery, see Introduction to IAM.
Copy datasets
You can copy a dataset, including partitioned data within a region or across regions, without extracting, moving, or reloading data into BigQuery. BigQuery uses the BigQuery Data Transfer Service in the backend to copy datasets. For location considerations when you transfer data, see Data location and transfers.
For each dataset copy configuration, you can have one transfer run active at a time. Additional transfer runs are queued. If you are using the Google Cloud console, you can schedule recurring copies and configure email or Pub/Sub notifications with the BigQuery Data Transfer Service.
Limitations
The following limitations apply when you copy datasets:
You can't copy the following resources from a source dataset:
- Views.
- Routines, including UDFs.
- External tables.
- Change data capture (CDC) tables if the copy job is across regions. Copying CDC tables within the same region is supported.
Cross-region copy jobs are not supported for tables encrypted with customer-managed encryption keys (CMEK) when the destination dataset is not encrypted with CMEK and no CMEK is provided. Copying tables with default encryption across regions is supported.
You can copy all encrypted tables within the same region, including tables encrypted with CMEK.
You can't use the following resources as destination datasets for copy jobs:
- Write-optimized storage.
- Datasets encrypted with CMEK if the copy job is across regions and the source table is not encrypted with CMEK. However, tables encrypted with CMEK are allowed as destination tables when copying within the same region.
The minimum frequency between copy jobs is 12 hours.
Appending data to a partitioned table in the destination dataset isn't supported.
If a table exists in the source dataset and the destination dataset, and the source table has not changed since the last successful copy, it's skipped. The source table is skipped even if the Overwrite destination tables checkbox is selected.
When truncating tables in the destination dataset, the dataset copy job doesn't detect any changes made to resources in the destination dataset before it begins the copy job. The dataset copy job overwrites all of the data in the destination dataset, including both the tables and schema.
The destination table might not reflect changes made to the source tables after a copy job starts.
Copying a dataset is not supported in BigQuery Omni regions.
To copy a dataset to a project in another VPC Service Controls service perimeter, you need to set the following egress rules:

- In the destination project's VPC Service Controls service perimeter configuration, the IAM principal must have the following methods:
  - `bigquery.datasets.get`
  - `bigquery.tables.list`
  - `bigquery.tables.get`
  - `bigquery.tables.getData`
- In the source project's VPC Service Controls service perimeter configuration, the IAM principal being used must have the method set to `All Methods`.
Copy a dataset
Select one of the following options:
Console
Enable the BigQuery Data Transfer Service for your destination dataset.
Ensure that you have the required roles.
If you intend to set up transfer run notifications for Pub/Sub (Option 2 later in these steps), then you must have the `pubsub.topics.setIamPolicy` permission. If you only set up email notifications, then Pub/Sub permissions are not required. For more information, see BigQuery Data Transfer Service run notifications.
Create a BigQuery dataset in the same region or a different region from your source dataset.
Option 1: Use the BigQuery copy function
To create a one-time transfer, use the BigQuery copy function:
Go to the BigQuery page.
In the Explorer panel, expand your project and select a dataset.
In the Dataset info section, click Copy, and then do the following:

In the Dataset field, either create a new dataset or select an existing dataset ID from the list. Dataset names within a project must be unique. The project and dataset can be in different regions, but not all regions are supported for cross-region dataset copying.
In the Location field, the location of the source dataset is displayed.
Optional: To overwrite both the data and schema of the destination tables with the source tables, select the Overwrite destination tables checkbox. Both the source and destination tables must have the same partitioning schema.
To copy the dataset, click Copy.
Option 2: Use the BigQuery Data Transfer Service
To schedule recurring copies and configure email or Pub/Sub notifications, use the BigQuery Data Transfer Service in the Google Cloud console of the destination project:
Go to the Data transfers page.
Click Create a transfer.
In the Source list, select Dataset Copy.
In the Display name field, enter a name for your transfer run.
In the Schedule options section, do the following:
For Repeat frequency, choose an option for how often to run the transfer:
If you select Custom, enter a custom frequency, for example, `every day 00:00`. For more information, see Formatting the schedule.

For Start date and run time, enter the date and time to start the transfer. If you choose Start now, this option is disabled.
In the Destination settings section, select a destination dataset to store your transfer data. You can also click CREATE NEW DATASET to create a new dataset before you select it for this transfer.
In the Data source details section, enter the following information:
- For Source dataset, enter the dataset ID that you want to copy.
- For Source project, enter the project ID of your source dataset.
To overwrite both the data and schema of the destination tables with the source tables, select the Overwrite destination tables checkbox. Both the source and destination tables must have the same partitioning schema.
In the Service Account menu, select a service account from the service accounts associated with your Google Cloud project. You can associate a service account with your transfer instead of using your user credentials. For more information about using service accounts with data transfers, see Use service accounts.
- If you signed in with a federated identity, then a service account is required to create a transfer. If you signed in with a Google Account, then a service account for the transfer is optional.
- The service account must have the required roles.
Optional: In the Notification options section, do the following:
- To enable email notifications, click the toggle. When you enable this option, the owner of the transfer configuration receives an email notification when a transfer run fails.
- To enable Pub/Sub notifications, click the toggle, and then either select a topic name from the list or click Create a topic. This option configures Pub/Sub run notifications for your transfer.
Click Save.
bq
Enable the BigQuery Data Transfer Service for your destination dataset.
Ensure that you have the required roles.
To create a BigQuery dataset, use the `bq mk` command with the dataset creation flag `--dataset` and the `--location` flag:

    bq mk \
      --dataset \
      --location=LOCATION \
      PROJECT:DATASET

Replace the following:

- `LOCATION`: the location where you want to copy the dataset
- `PROJECT`: the project ID of your target dataset
- `DATASET`: the name of the target dataset
To copy a dataset, use the `bq mk` command with the transfer creation flag `--transfer_config` and the `--data_source` flag. You must set the `--data_source` flag to `cross_region_copy`. For a complete list of valid values for the `--data_source` flag, see the transfer-config flags in the bq command-line tool reference.

    bq mk \
      --transfer_config \
      --project_id=PROJECT \
      --data_source=cross_region_copy \
      --target_dataset=DATASET \
      --display_name=NAME \
      --service_account_name=SERVICE_ACCOUNT \
      --params='PARAMETERS'
Replace the following:

- `NAME`: the display name for the copy job or the transfer configuration
- `SERVICE_ACCOUNT`: the service account name used to authenticate your transfer. The service account should be owned by the same `project_id` used to create the transfer, and it should have all of the required permissions.
- `PARAMETERS`: the parameters for the transfer configuration in JSON format

Parameters for a dataset copy configuration include the following:

- `source_dataset_id`: the ID of the source dataset that you want to copy
- `source_project_id`: the ID of the project that your source dataset is in
- `overwrite_destination_table`: an optional flag that lets you truncate the tables of a previous copy and refresh all the data

Both the source and destination tables must have the same partitioning schema.
The following examples show the formatting of the parameters, based on your system's environment:
- Linux: use single quotes to enclose the JSON string. For example:

      '{"source_dataset_id":"mydataset","source_project_id":"mysourceproject","overwrite_destination_table":"true"}'

- Windows command line: use double quotes to enclose the JSON string, and escape double quotes in the string with a backslash. For example:

      "{\"source_dataset_id\":\"mydataset\",\"source_project_id\":\"mysourceproject\",\"overwrite_destination_table\":\"true\"}"

- PowerShell: use single quotes to enclose the JSON string, and escape double quotes in the string with a backslash. For example:

      '{\"source_dataset_id\":\"mydataset\",\"source_project_id\":\"mysourceproject\",\"overwrite_destination_table\":\"true\"}'
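Rather than hand-escaping the JSON for each shell, you can generate the parameter string with Python's standard `json` module. This is a sketch, not part of the bq tool; `json.dumps` produces a compact, correctly quoted string that you can paste into the `--params` value:

```python
import json

# Parameters for the dataset copy configuration, as described above.
# The service expects string values here, including "true".
params = {
    "source_dataset_id": "mydataset",
    "source_project_id": "mysourceproject",
    "overwrite_destination_table": "true",
}

# Compact JSON with no whitespace, suitable for --params='...' on Linux.
params_json = json.dumps(params, separators=(",", ":"))
print(params_json)
# {"source_dataset_id":"mydataset","source_project_id":"mysourceproject","overwrite_destination_table":"true"}
```

On Windows shells you would still apply the quoting rules listed above to the printed string.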
For example, the following command creates a dataset copy configuration named `My Transfer` with a target dataset named `mydataset` and a project with the ID `myproject`:

    bq mk \
      --transfer_config \
      --project_id=myproject \
      --data_source=cross_region_copy \
      --target_dataset=mydataset \
      --display_name='My Transfer' \
      --params='{
          "source_dataset_id":"123_demo_eu",
          "source_project_id":"mysourceproject",
          "overwrite_destination_table":"true"
          }'
API
Enable the BigQuery Data Transfer Service for your destination dataset.
Ensure that you have the required roles.
To create a BigQuery dataset, call the `datasets.insert` method with a defined dataset resource.

To copy a dataset, use the `projects.locations.transferConfigs.create` method and supply an instance of the `TransferConfig` resource.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Install the Python client for the BigQuery Data Transfer API with `pip install google-cloud-bigquery-datatransfer`. Then create a transfer configuration to copy the dataset.
To avoid additional storage costs, consider deleting the prior dataset.
View dataset copy jobs
To see the status and view details of a dataset copy job in the Google Cloud console, do the following:
In the Google Cloud console, go to the Data transfers page.
Select a transfer for which you want to view the transfer details, and then do the following:
On the Transfer details page, select a transfer run.
To refresh, click
Refresh.
Recreate datasets in another location
To manually move a dataset from one location to another, follow these steps:
- Export the data from your BigQuery tables to a Cloud Storage bucket in either the same location as your dataset or in a location contained within your dataset's location. For example, if your dataset is in the `EU` multi-region location, you could export your data to the `europe-west1` Belgium location, which is part of the EU. There are no charges for exporting data from BigQuery, but you do incur charges for storing the exported data in Cloud Storage. BigQuery exports are subject to the limits on export jobs.
- Copy or move the data from your export Cloud Storage bucket to a new bucket you created in the destination location. For example, if you are moving your data from the `US` multi-region to the `asia-northeast1` Tokyo region, you would transfer the data to a bucket that you created in Tokyo. For information about transferring Cloud Storage objects, see Copy, rename, and move objects in the Cloud Storage documentation. Transferring data between regions incurs network egress charges in Cloud Storage.
- Create a new BigQuery dataset in the new location, and then load your data from the Cloud Storage bucket into the new dataset. You are not charged for loading the data into BigQuery, but you do incur charges for storing the data in Cloud Storage until you delete the data or the bucket. You are also charged for storing the data in BigQuery after it is loaded. Loading data into BigQuery is subject to the load job limits.
You can also use Cloud Composer to move and copy large datasets programmatically.
For more information about using Cloud Storage to store and move large datasets, see Use Cloud Storage with big data.
Secure datasets
To control access to datasets in BigQuery, see Controlling access to datasets. For information about data encryption, see Encryption at rest.
Delete datasets
When you delete a dataset by using the Google Cloud console, tables and views
in the dataset, including their data, are deleted. When you delete a
dataset by using the bq command-line tool, you must use the -r
flag to delete the
tables and views.
To delete a dataset, select one of the following options:
Console
Go to the BigQuery page.
In the Explorer pane, expand your project and select a dataset.
Expand the Actions option and click Delete.

In the Delete dataset dialog, type `delete` into the field, and then click Delete.
SQL
To delete a dataset, use the `DROP SCHEMA` DDL statement.

The following example deletes a dataset named `mydataset`:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
DROP SCHEMA IF EXISTS mydataset;
By default, this only works to delete an empty dataset. To delete a dataset and all of its contents, use the `CASCADE` keyword:

    DROP SCHEMA IF EXISTS mydataset CASCADE;
Click
Run.
For more information about how to run queries, see Run an interactive query.
bq
Use the `bq rm` command with the `--dataset` or `-d` flag, which is optional.
If your dataset contains tables, you must use the -r
flag to
remove all tables in the dataset. If you use the -r
flag, then you can omit
the --dataset
or -d
flag.
After you run the command, the system asks for confirmation. You can use the
-f
flag to skip the confirmation.
If you are deleting a dataset in a project other than your default project, add the project ID to the dataset name in the following format: `PROJECT_ID:DATASET`.
bq rm -r -f -d PROJECT_ID:DATASET
Replace the following:
- `PROJECT_ID`: your project ID
- `DATASET`: the name of the dataset that you're deleting
Examples:
Enter the following command to remove a dataset named `mydataset` and all the tables in it from your default project. The command uses the `-d` flag.
bq rm -r -d mydataset
When prompted, type y
and press enter.
Enter the following command to remove mydataset
and all the tables in it
from myotherproject
. The command does not use the optional -d
flag.
The -f
flag is used to skip confirmation.
bq rm -r -f myotherproject:mydataset
You can use the bq ls
command to confirm that the dataset was deleted.
API
Call the `datasets.delete` method to delete the dataset, and set the `deleteContents` parameter to `true` to delete the tables in it.
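For illustration, this corresponds to an HTTP `DELETE` request against the BigQuery v2 datasets endpoint with `deleteContents=true`. The following sketch only builds the request URL with the Python standard library; a real call also needs an OAuth 2.0 bearer token and is better made through a client library:

```python
from urllib.parse import urlencode


def dataset_delete_url(project_id, dataset_id, delete_contents=True):
    """Build the datasets.delete request URL (sent with the DELETE method)."""
    base = (
        "https://bigquery.googleapis.com/bigquery/v2"
        f"/projects/{project_id}/datasets/{dataset_id}"
    )
    # deleteContents must be the lowercase strings "true" or "false".
    query = urlencode({"deleteContents": str(delete_contents).lower()})
    return f"{base}?{query}"


print(dataset_delete_url("myproject", "mydataset"))
# https://bigquery.googleapis.com/bigquery/v2/projects/myproject/datasets/mydataset?deleteContents=true
```

Without `deleteContents=true`, the request fails if the dataset still contains tables.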
C#
The following code sample deletes an empty dataset.
Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following code sample deletes a dataset and all of its contents:
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
The following code sample deletes an empty dataset.
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following code sample deletes a dataset and all of its contents:
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
PHP
Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Ruby
The following code sample deletes an empty dataset.
Before trying this sample, follow the Ruby setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Ruby API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following code sample deletes a dataset and all of its contents:
Restore tables from deleted datasets
You can restore tables from a deleted dataset that are within the dataset's time travel window. To restore the entire dataset, see undelete datasets.
- Create a dataset with the same name and in the same location as the original.
- Choose a timestamp from before the original dataset was deleted, using a format of milliseconds since the epoch, for example, `1418864998000`.
- Copy the `originaldataset.table1` table at the time `1418864998000` into the new dataset:

      bq cp originaldataset.table1@1418864998000 mydataset.mytable

To find the names of the nonempty tables that were in the deleted dataset, query the dataset's `INFORMATION_SCHEMA.TABLE_STORAGE` view within the time travel window.
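The snapshot decorator after `@` is a Unix timestamp in milliseconds. A small sketch for computing that value from a timezone-aware datetime with the Python standard library (the helper name is illustrative):

```python
from datetime import datetime, timezone


def to_snapshot_millis(dt):
    """Convert an aware datetime to milliseconds since the epoch,
    suitable for a table decorator such as table1@1418864998000."""
    return int(dt.timestamp() * 1000)


# For example, 2014-12-18 01:09:58 UTC is the instant used above:
ts = to_snapshot_millis(datetime(2014, 12, 18, 1, 9, 58, tzinfo=timezone.utc))
print(ts)  # 1418864998000
```

Pick any instant after the table last changed and before the dataset was deleted, as long as it falls within the time travel window.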
Undelete datasets
You can undelete a dataset to recover it to the state that it was in when it was deleted. You can only undelete datasets that are within your time travel window. This recovery includes all of the objects that were contained in the dataset, the dataset properties, and the security settings. For resources that are not recovered, see Undelete dataset limitations.
Undelete dataset limitations
- Restored datasets might reference security principals that no longer exist.
- References to a deleted dataset in linked datasets are not restored by undelete. Subscribers must subscribe again to manually restore the links.
- Business tags are not restored by undelete.
- You must manually refresh materialized views and reauthorize authorized views, authorized datasets, and authorized routines. Note that when authorized resources (views, datasets, and routines) are deleted, it takes up to 24 hours for the authorization to be deleted. So, if you undelete a dataset with an authorized resource less than 24 hours after deletion, reauthorization might not be necessary. As a best practice, always verify authorization after undeleting resources.
Undelete a dataset
To undelete a dataset, select one of the following options:
SQL
Use the `UNDROP SCHEMA` data definition language (DDL) statement:

In the Google Cloud console, go to the BigQuery page.

In the query editor, enter the following statement:

    UNDROP SCHEMA DATASET_ID;

Replace `DATASET_ID` with the dataset that you want to undelete.

Specify the location of the dataset that you want to undelete.

Click Run.
For more information about how to run queries, see Running interactive queries.
API
Call the
datasets.undelete
method.
When you undelete a dataset, the following errors might occur:
- `ALREADY_EXISTS`: a dataset with the same name already exists in the region that you tried to undelete in. You can't use undelete to overwrite or merge datasets.
- `NOT_FOUND`: the dataset you're trying to recover is past its time travel window, it never existed, or you didn't specify the correct location of the dataset.
- `ACCESS_DENIED`: you don't have the correct permissions to undelete this dataset.
Quotas
For information about copy quotas, see Copy jobs.
Usage for copy jobs is available in `INFORMATION_SCHEMA`. To learn how to query the `INFORMATION_SCHEMA.JOBS` view, see Get usage of copy jobs.
Pricing
For pricing information for copying datasets, see Data replication pricing.
BigQuery sends compressed data for copying across regions, so the data that is billed might be less than the actual size of your dataset. For more information, see BigQuery pricing.
What's next
- Learn how to create datasets.
- Learn how to update datasets.