How to list the contents of a Google Drive folder by ID with the V3 API Python client?
The official documentation here is not helping much on this topic. First of all, it is an example of searching by mimeType, not by parent ID. And when I replaced the search term with the example from here, it still did not find anything. My code is below. Note that the function is named copy_folder
only because I am actually trying to copy all files and subfolders from one folder into a new folder, and the first step is to get the contents of the source folder. The source folder is on a team drive. The 'files' key in the response is just empty, even though the folder I am testing with actually contains files and subfolders.
def copy_folder(service, src, dst):
    """
    Copies a folder. It will create a new folder in dst with the same name as src,
    and copies the contents of src into the new folder
    src: Source folder's id
    dst: Destination folder's id that the source folder is going to be copied to
    """
    page_token = None
    while True:
        response = service.files().list(q="'%s' in parents" % src,
                                        supportsAllDrives=True,
                                        spaces='drive',
                                        fields='nextPageToken, files(id, name)',
                                        pageToken=page_token,
                                        ).execute()
        for file in response.get('files', []):
            # Process change
            print('Found file: %s (%s)' % (file.get('name'), file.get('id')))
        page_token = response.get('nextPageToken', None)
        if page_token is None:
            break
In this case, I think that a recursive function is required in order to also retrieve the files in the subfolders. I created a library that implements this, so in this answer I would like to propose using the getfilelistpy library to achieve your goal. When this library is used, the script is as follows.
Sample script:
Before using this script, please install getfilelistpy as follows.
$ pip install getfilelistpy
And after you copy and paste the following script, please set the variable src.
import pickle
import os.path
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from getfilelistpy import getfilelist

SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
creds = None
creFile = 'token_sample.pickle'
if os.path.exists(creFile):
    with open(creFile, 'rb') as token:
        creds = pickle.load(token)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            'client_secret.json', SCOPES)
        creds = flow.run_local_server()
    with open(creFile, 'wb') as token:
        pickle.dump(creds, token)

src = '###'  # Please set the folder ID.
fields = 'nextPageToken, files(id, name)'
resource = {
    "oauth2": creds,
    "id": src,
    "fields": fields,
}
res = getfilelist.GetFileList(resource)
print(dict(res))
- In this case, you can also use the scopes of https://www.googleapis.com/auth/drive.readonly and https://www.googleapis.com/auth/drive.
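If it helps, here is a minimal sketch of flattening the result into one plain list of files. It assumes the dict returned by getfilelist.GetFileList() exposes a 'fileList' key whose entries each carry a 'files' list; please verify the exact keys against the output of print(dict(res)) above.

# Minimal sketch (not part of the original answer): flatten the result of
# getfilelist.GetFileList() into a single list of file metadata dicts.
# Assumption: the result has a "fileList" key and each of its entries has a
# "files" list; confirm the keys with print(dict(res)) above.
all_files = []
for folder in dict(res).get('fileList', []):
    all_files.extend(folder.get('files') or [])

for f in all_files:
    print('%s (%s)' % (f.get('name'), f.get('id')))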
Note:
When you want to retrieve only the file list directly under a specific folder, you can also use the following script.
# This snippet reuses "creds" from the authorization flow in the script above.
from googleapiclient.discovery import build

src = '###'  # Please set the folder ID.
service = build('drive', 'v3', credentials=creds)
fields = 'nextPageToken, files(id, name)'
q = "'%s' in parents" % src
values = []
nextPageToken = ""
while True:
    res = service.files().list(q=q, fields=fields, pageSize=1000, pageToken=nextPageToken or "",
                               includeItemsFromAllDrives=True, supportsAllDrives=True).execute()
    values.extend(res.get("files"))
    nextPageToken = res.get("nextPageToken")
    if nextPageToken is None:
        break
print(values)
In this case, in order to retrieve the file list from a shared drive, supportsAllDrives=True and includeItemsFromAllDrives=True are used. If the files still cannot be retrieved, please also add corpora="drive", driveId="###driveId###".
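As a concrete illustration of that note, the list() call in the script above could be extended as follows. This is only a sketch: ###driveId### is the same placeholder as in the note and must be replaced with your shared drive's ID.

# Sketch of the same list() call, additionally scoped to one shared drive.
# "###driveId###" is a placeholder; replace it with the actual shared drive ID.
res = service.files().list(
    q=q,
    fields=fields,
    pageSize=1000,
    pageToken=nextPageToken or "",
    corpora="drive",
    driveId="###driveId###",
    includeItemsFromAllDrives=True,
    supportsAllDrives=True,
).execute()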