Scraping AJAX loaded content with python?

So I have a function that is called when I click a button; it looks like this:

var min_news_id = "68feb985-1d08-4f5d-8855-cb35ae6c3e93-1";
function loadMoreNews(){
  $("#load-more-btn").hide();
  $("#load-more-gif").show();
  $.post("/en/ajax/more_news",{'category':'','news_offset':min_news_id},function(data){
      data = JSON.parse(data);
      min_news_id = data.min_news_id||min_news_id;
      $(".card-stack").append(data.html);
  })
  .fail(function(){alert("Error : unable to load more news");})
  .always(function(){$("#load-more-btn").show();$("#load-more-gif").hide();});
}
jQuery.scrollDepth();

Now I don't have much experience with JavaScript, but I assume it's fetching some JSON data from some kind of API at "/en/ajax/more_news".

Is there a way to call this API directly and get the JSON data from my Python script? If so, how?

If not, how else can I scrape the content that is being generated?

You need to POST the news id you see in the script to https://www.inshorts.com/en/ajax/more_news. This is an example using requests:

from bs4 import BeautifulSoup
import requests
import re

# pattern to extract min_news_id
patt = re.compile(r'var min_news_id\s+=\s+"(.*?)"')

with requests.Session() as s:
    soup = BeautifulSoup(s.get("https://www.inshorts.com/en/read").content, "lxml")
    # find the inline <script> that defines min_news_id
    new_id_scr = soup.find("script", text=re.compile(r"var\s+min_news_id"))
    print(new_id_scr.text)
    # group(1) is the id itself, without the surrounding quotes
    news_id = patt.search(new_id_scr.text).group(1)
    js = s.post("https://www.inshorts.com/en/ajax/more_news", data={"news_offset": news_id})
    print(js.json())
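For clarity, here is how the capture group behaves on the script text (the sample string below is just the variable declaration from the question, used as illustrative input):

```python
import re

patt = re.compile(r'var min_news_id\s+=\s+"(.*?)"')
script_text = 'var min_news_id = "68feb985-1d08-4f5d-8855-cb35ae6c3e93-1";'

match = patt.search(script_text)
print(match.group(1))  # just the id: 68feb985-1d08-4f5d-8855-cb35ae6c3e93-1
print(match.group())   # the whole match, including quotes and the variable name
```

This is why `group(1)` is needed: the bare `group()` would include `var min_news_id = "..."`, which the server would not accept as a valid `news_offset`.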

The response gives you all the HTML; you just need to access js.json()["html"].
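To see what that "html" field looks like once parsed, here is a minimal, self-contained sketch; the sample fragment is made up, but it mirrors the news-card structure the scraping below relies on:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment of the kind the endpoint returns in data["html"]
sample_html = (
    '<div class="news-card">'
    '<div itemprop="articleBody">Some summary text.</div>'
    '<a class="source" href="https://example.com/story/">Read more</a>'
    '</div>'
)

soup = BeautifulSoup(sample_html, "html.parser")
card = soup.find("div", {"class": "news-card"})
summary = card.find("div", {"itemprop": "articleBody"}).text
url = card.find("a", {"class": "source"}).get("href")
print(summary)  # Some summary text.
print(url)      # https://example.com/story/
```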

Here is a script that automatically loops through all the pages of inshorts.com:
from bs4 import BeautifulSoup
from newspaper import Article
import requests
import re

patt = re.compile(r'var min_news_id\s+=\s+"(.*?)"')

with requests.Session() as s:
    # fetch the first page once to get the initial news_offset
    soup = BeautifulSoup(s.get("https://www.inshorts.com/en/read").content, "lxml")
    new_id_scr = soup.find("script", text=re.compile(r"var\s+min_news_id"))
    news_id = patt.search(new_id_scr.text).group(1)

    while True:
        js = s.post("https://www.inshorts.com/en/ajax/more_news",
                    data={"news_offset": news_id})
        data = js.json()
        news_id = data["min_news_id"]
        soup = BeautifulSoup(data["html"], "lxml")
        for tag in soup.find_all("div", {"class": "news-card"}):
            main_text = tag.find("div", {"itemprop": "articleBody"})
            summ_text = main_text.text.replace("\n", " ")
            result = tag.find("a", {"class": "source"})
            art_url = result.get("href")
            if "www.youtube.com" in art_url:
                continue  # skip video-only cards
            art_url = art_url[:-1]  # drop the trailing slash
            article = Article(art_url)
            article.download()
            if article.is_downloaded:
                article.parse()
                article_text = article.text.replace("\n", " ")
                print(article_text + "\n")
                print(summ_text + "\n")

It prints the summary from inshorts.com and the full news article from the respective news channel.