Get all commits of a file in a GitHub repo
If I use the URL below,
https://api.github.com/repos/raspberrypi/linux/commits?path=drivers/gpu/drm/i915/intel_display.c
I get the first 30 commits. But that file has roughly 3,000 commits. How can I get all the commits for this file?
You can query all the commits associated with a given file:
var stats = new Dictionary<string, int>();
foreach (var path in allPaths)
{
    var request = new CommitRequest { Path = path };
    var commitsForFile = await client.Repository.Commit.GetAll(Owner, Name, request);
    stats.Add(path, commitsForFile.Count);
}
This code uses Octokit, which is officially maintained and supported by GitHub. All Octokit libraries are released under the MIT license, which means they can be modified and used in any project. Note that GetAll pages through the results for you, so you get every commit for the file rather than just the first 30.
After looking at Pagination in the GitHub API documentation (though you have to be careful about the API request rate, as discussed here), I ended up doing something like the following code.
import requests
import json

# Tokens in the query string are no longer accepted by the GitHub API;
# send the token in the Authorization header instead.
url = 'https://api.github.com/repos/raspberrypi/linux/commits?path=drivers/gpu/drm/i915/intel_display.c'
headers = {'Authorization': 'token <your github user OAuth token>'}

commits = []
i = 1
r = requests.get(url + '&per_page=100&page=' + str(i), headers=headers)
# Keep requesting pages until the API returns an empty list.
while len(r.json()) != 0:
    commits.extend(r.json())
    i += 1
    r = requests.get(url + '&per_page=100&page=' + str(i), headers=headers)

with open('commits.txt', 'w') as outfile:
    json.dump(commits, outfile)
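Instead of incrementing a page counter until an empty page comes back, you can follow the Link header the API returns, which tells you directly whether a next page exists. This is a minimal sketch of that approach, assuming the requests library; fetch_all_commits and next_page_url are illustrative helper names, not part of any library.

```python
import requests


def next_page_url(link_header):
    """Extract the URL marked rel="next" from a Link header, or None."""
    if not link_header:
        return None
    for part in link_header.split(','):
        url_part, _, rel_part = part.partition(';')
        if 'rel="next"' in rel_part:
            return url_part.strip().strip('<>')
    return None


def fetch_all_commits(owner, repo, path, token):
    """Collect every commit for a file by following Link headers."""
    url = 'https://api.github.com/repos/{}/{}/commits'.format(owner, repo)
    headers = {'Authorization': 'token ' + token}
    params = {'path': path, 'per_page': 100}
    commits = []
    while url:
        r = requests.get(url, headers=headers, params=params)
        r.raise_for_status()
        commits.extend(r.json())
        url = next_page_url(r.headers.get('Link'))
        params = None  # the "next" URL already carries the query string
    return commits
```

This avoids one wasted request for the final empty page and keeps working even if the API's paging behavior changes, since the server itself says when there are no more pages.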