What are the best practices for calling an external API?

Suppose I want to write a spider that uses the Facebook API to count the likes on each page of a website. If I import the requests library, I can call the Facebook Graph API as follows:

import scrapy
import json
import requests

API_KEY = "KEY_GOES_HERE"

class WebSite(scrapy.Spider):
    name = "website_page"
    allowed_domains = ["website.com"]
    start_urls = ['https://website.com/']

    def get_likes(self, url):
        # Synchronous, blocking call to the Graph API via requests
        base = 'https://graph.facebook.com/{}?access_token={}'.format(url, API_KEY)
        data = requests.get(base)
        return self.parse_likes(data)

    def parse_likes(self, data):
        # Returns (id, comment_count, share_count)
        data = json.loads(data.text)
        return data['id'], data['share']['comment_count'], data['share']['share_count']

    def parse(self, response):
        item = {}
        item['url'] = response.url
        links = response.css('a::attr(href)').extract()
        # Unpack in the same order that parse_likes returns
        item['fb_url'], item['comments'], item['shares'] = self.get_likes(response.url)
        for link in links:
            link = response.urljoin(link)
            item['link'] = link
            yield scrapy.Request(link, callback=self.parse)
        yield item

However, if instead of requests I use a scrapy.Request call, I can't seem to get this code to work. Something like this:

import scrapy
import json

API_KEY = "KEY_GOES_HERE"

class WebSite(scrapy.Spider):
    name = "website_page"
    allowed_domains = ["website.com"]
    start_urls = ['https://website.com/']

    def get_likes(self, url):
        base = 'https://graph.facebook.com/{}?access_token={}'.format(url, API_KEY)
        return scrapy.Request(base, callback=self.parse_likes)

    def parse_likes(self, data):
        data = json.loads(data.text)
        return data['id'], data['share']['comment_count'], data['share']['share_count']

    def parse(self, response):
        item = {}
        links = response.css('a::attr(href)').extract()
        item['url'] = response.url
        item['fb_data'] = self.get_likes(response.url).body
        for link in links:
            link = response.urljoin(link)
            item['link'] = link
            yield scrapy.Request(link, callback=self.parse)
        yield item

In this case, I only get a blank response for the Facebook data. I think I'm missing something about how the scrapy.Request method works compared to the standard requests library. Any ideas?

This is a very common case: how do you yield an item that is built from multiple urls?

The blank response is expected: unlike requests.get, scrapy.Request(...) only constructs a request object and downloads nothing by itself. The page is fetched only after you yield the request to the engine, which then calls your callback with the response. (Calling requests.get inside a spider does work, but it is a blocking call that stalls Scrapy's asynchronous engine, so it is best avoided.) The most common solution is to chain requests, carrying your item along in the request.meta parameter.

An example implementation of this logic for your case could look like this:

import scrapy
import json

class WebSite(scrapy.Spider):
    name = "website_page"
    allowed_domains = ["website.com"]
    start_urls = ['https://website.com/']
    base = 'https://graph.facebook.com/{}?access_token={}'.format
    api_key = '1234'

    def parse(self, response):
        links = response.css('a::attr(href)').extract()
        for link in links:
            item = {}
            item['url'] = response.url
            link = response.urljoin(link)
            item['link'] = link
            # Chain a second request to the Graph API, carrying the
            # partially built item along in request.meta
            api_url = self.base(link, self.api_key)
            yield scrapy.Request(api_url,
                                 callback=self.parse_likes,
                                 meta={'item': item})

    def parse_likes(self, response):
        # Pick the item back up and finish populating it
        item = response.meta['item']
        data = json.loads(response.text)
        item['fb_url'] = data['id']
        item['comments'] = data['share']['comment_count']
        item['shares'] = data['share']['share_count']
        yield item
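
On newer Scrapy versions (1.7+), the same chaining pattern can also be written with cb_kwargs, which is the now-recommended replacement for passing data through meta. A minimal sketch of just the two callbacks, reusing the base and api_key attributes from the example above:

    def parse(self, response):
        for link in response.css('a::attr(href)').extract():
            link = response.urljoin(link)
            item = {'url': response.url, 'link': link}
            # cb_kwargs entries arrive as named arguments in the callback
            yield scrapy.Request(self.base(link, self.api_key),
                                 callback=self.parse_likes,
                                 cb_kwargs={'item': item})

    def parse_likes(self, response, item):
        data = json.loads(response.text)
        item['fb_url'] = data['id']
        item['comments'] = data['share']['comment_count']
        item['shares'] = data['share']['share_count']
        yield item

Either way, the key point is the same: a scrapy.Request is only scheduled, never fetched inline, so anything the second callback needs has to travel with the request.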