How can I read a tar.gz file directly from a URL into Pandas?
The dataset I want to read is stored on GitHub as a tar.gz file and is updated every few hours. I could always download the file, extract it, and read the CSV, but it would be much nicer if I could read it straight from the URL into a Pandas DataFrame.

After some googling, I managed to download the compressed file and then read it as a DataFrame:
import requests
import tarfile
import pandas as pd

# Download file from GitHub
url = "https://github.com/beoutbreakprepared/nCoV2019/blob/master/latest_data/latestdata.tar.gz?raw=true"
target_path = "latestdata.tar.gz"

response = requests.get(url, stream=True)
if response.status_code == 200:
    with open(target_path, "wb") as f:
        f.write(response.raw.read())

# Read from downloaded file
with tarfile.open(target_path, "r:*") as tar:
    csv_path = tar.getnames()[0]
    df = pd.read_csv(tar.extractfile(csv_path), header=0, sep=",")
However, I'm wondering whether there is a way to read the file contents directly into a DataFrame without saving the file locally first. This could be useful if I later build a web app and don't have a local machine. Any help would be appreciated, thanks!
You can keep the data in memory with BytesIO (an in-memory stream) instead of saving the file to the local machine.
Also, according to the tarfile.open documentation, if fileobj is specified, it is used as an alternative to a file object opened in binary mode for name.
>>> import tarfile
>>> from io import BytesIO
>>>
>>> import requests
>>> import pandas as pd
>>> url = "https://github.com/beoutbreakprepared/nCoV2019/blob/master/latest_data/latestdata.tar.gz?raw=true"
>>> response = requests.get(url, stream=True)
>>> with tarfile.open(fileobj=BytesIO(response.raw.read()), mode="r:gz") as tar_file:
...     for member in tar_file.getmembers():
...         f = tar_file.extractfile(member)
...         df = pd.read_csv(f)
...         print(df)
If you use ParData, this can be done quite cleanly:
from tempfile import TemporaryDirectory

import pardata

schema = {
    'download_url': 'https://github.com/beoutbreakprepared/nCoV2019/blob/master/latest_data/latestdata.tar.gz?raw=true',
    'subdatasets': {
        'all': {
            'path': 'latestdata.csv',
            'format': {
                'id': 'table/csv'
            }
        }
    }
}

with TemporaryDirectory() as d:
    dataset = pardata.dataset.Dataset(schema=schema, data_dir=d)
    dataset.download(verify_checksum=False)
    my_csv = dataset.load()  # my_csv is a pandas.DataFrame object that stores the CSV file
    print(my_csv)
Disclaimer: I am a lead co-maintainer of ParData.