Beautifulsoup Cannot FindAll

I am trying to scrape nature.com to do some analysis of journal articles. When I execute the following:

import requests
from bs4 import BeautifulSoup
import re

query = "http://www.nature.com/search?journal=nature&order=date_desc"

for page in range(1, 10):
    req = requests.get(query + "&page=" + str(page))
    soup = BeautifulSoup(req.text)
    cards = soup.findAll("li", "mb20 card cleared")
    matches = re.findall('mb20 card cleared', req.text)
    print(len(cards), len(matches))

I expected Beautifulsoup to print "25" (the number of search results) ten times, one for each page, but it does not. Instead, it prints:

14, 25
12, 25
25, 25
15, 25 
15, 25
17, 25
17, 25
15, 25
14, 25

Inspecting the HTML source shows that each page should return 25 results, but Beautifulsoup seems to get confused here, and I can't figure out why.

Update 1: In case it matters, I am running Mac OS X Mavericks with Anaconda Python 2.7.10 and bs4 version 4.3.1.

Update 2: I added a regular expression to show that req.text really does contain what I am looking for, even though BeautifulSoup does not find it.

Update 3: When I run this simple script several times, I sometimes get a "Segmentation fault: 11". Not sure why.

The problem comes down to differences between the parsers that BeautifulSoup uses under the hood.

If you don't specify a parser explicitly, BeautifulSoup chooses one based on its ranking:

If you don’t specify anything, you’ll get the best HTML parser that’s installed. Beautiful Soup ranks lxml’s parser as being the best, then html5lib’s, then Python’s built-in parser.
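A quick way to confirm that the parser is to blame is to feed the same response to all three parsers and compare the counts. This is just a sketch, and it assumes lxml and html5lib are installed alongside bs4:

import requests
from bs4 import BeautifulSoup

# Fetch a single page of results and count the cards with each parser.
# If the counts disagree, the parser is the culprit, not the page.
req = requests.get("http://www.nature.com/search?journal=nature&order=date_desc&page=1")
for parser in ("html.parser", "lxml", "html5lib"):
    soup = BeautifulSoup(req.text, parser)
    print(parser, len(soup.findAll("li", "mb20 card cleared")))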

Specify the parser explicitly:

soup = BeautifulSoup(data, 'html5lib')
soup = BeautifulSoup(data, 'html.parser')
soup = BeautifulSoup(data, 'lxml')
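For example, here is the loop from the question with the parser pinned (html5lib is chosen purely as an illustration; use whichever parser handles the page correctly on your machine):

import requests
from bs4 import BeautifulSoup

query = "http://www.nature.com/search?journal=nature&order=date_desc"

for page in range(1, 10):
    req = requests.get(query + "&page=" + str(page))
    # Parser specified explicitly, so the result no longer depends on
    # whatever parser happens to be installed.
    soup = BeautifulSoup(req.text, "html5lib")
    cards = soup.findAll("li", "mb20 card cleared")
    print(len(cards))

With the parser fixed, the card count should line up with the 25 matches that the regular expression finds.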