Python web crawler using BeautifulSoup, trouble getting URLs

So I'm trying to build a dynamic web crawler to get all the URL links within a page. So far I'm able to get all the links for the chapters, but when I try to get the section links from each chapter, my output doesn't print anything.

The code I'm using:

#########################Chapters#######################

import requests
from bs4 import BeautifulSoup, SoupStrainer
import re


base_url = "http://law.justia.com/codes/alabama/2015/title-{title:01d}/"

for title in range (1,4): 
url = base_url.format(title=title)
r = requests.get(url)

 for link in BeautifulSoup((r.content),"html.parser",parse_only=SoupStrainer('a')):
  if link.has_attr('href'):
    if 'chapt' in link['href']:
        href = "http://law.justia.com" + link['href']
        leveltwo(href)

#########################Sections#######################

def leveltwo(item_url):
 r = requests.get(item_url)
 soup = BeautifulSoup((r.content),"html.parser")
 section = soup.find('div', {'class': 'primary-content' })
 for sublinks in section.find_all('a'):
        sectionlinks = sublinks.get('href')
        print (sectionlinks)

With some minor modifications to your code, I was able to get it to run and output the sections. Mainly, you need to fix your indentation, and define a function before you call it.

#########################Chapters#######################

import requests
from bs4 import BeautifulSoup, SoupStrainer
import re

def leveltwo(item_url):
    r = requests.get(item_url)
    soup = BeautifulSoup((r.content),"html.parser")
    section = soup.find('div', {'class': 'primary-content' })
    for sublinks in section.find_all('a'):
        sectionlinks = sublinks.get('href')
        print (sectionlinks)

base_url = "http://law.justia.com/codes/alabama/2015/title-{title:01d}/"

for title in range (1,4): 
    url = base_url.format(title=title)
    r = requests.get(url)

for link in BeautifulSoup((r.content),"html.parser",parse_only=SoupStrainer('a')):
    try:
        if 'chapt' in link['href']:
            href = "http://law.justia.com" + link['href']
            leveltwo(href)
        else:
            continue
    except KeyError:
        continue
#########################Sections#######################

Output:

/codes/alabama/2015/title-3/chapter-1/section-3-1-1/index.html
/codes/alabama/2015/title-3/chapter-1/section-3-1-2/index.html
/codes/alabama/2015/title-3/chapter-1/section-3-1-3/index.html
/codes/alabama/2015/title-3/chapter-1/section-3-1-4/index.html
etc.
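The printed section links are site-relative paths. If full URLs are wanted instead, one option (not part of either answer here, just a minimal sketch using the standard library's urljoin) is to resolve each href against the site root:

from urllib.parse import urljoin

base = "http://law.justia.com"
href = "/codes/alabama/2015/title-3/chapter-1/section-3-1-1/index.html"

# urljoin resolves a site-relative href against the base, giving the absolute URL
print(urljoin(base, href))
# http://law.justia.com/codes/alabama/2015/title-3/chapter-1/section-3-1-1/index.html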

You don't need any try/except blocks; you can use href=True with find or find_all to select only the anchor tags that have an href, or CSS select a[href] as below. The chapter links are in the first ul inside the article tag with the id #maincontent, so you don't need to filter at all:

import requests
from bs4 import BeautifulSoup

base_url = "http://law.justia.com/codes/alabama/2015/title-{title:01d}/"

def leveltwo(item_url):
    r = requests.get(item_url)
    soup = BeautifulSoup(r.content, "html.parser")
    section_links = [a["href"] for a in soup.select('div .primary-content a[href]')]
    print(section_links)

for title in range(1, 4):
    url = base_url.format(title=title)
    r = requests.get(url)
    for link in BeautifulSoup(r.content, "html.parser").select("#maincontent ul:nth-of-type(1) a[href]"):
        href = "http://law.justia.com" + link['href']
        leveltwo(href)

If you were to use find_all, you would simply pass find_all(..., href=True) to filter your anchor tags so that only those with an href are selected.
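As a minimal sketch of that approach (reusing the title-1 URL from above; this simply prints every href on the page rather than filtering for chapter links):

import requests
from bs4 import BeautifulSoup

r = requests.get("http://law.justia.com/codes/alabama/2015/title-1/")
soup = BeautifulSoup(r.content, "html.parser")

# href=True keeps only <a> tags that actually have an href attribute,
# so no has_attr check or try/except around link['href'] is needed
for a in soup.find_all("a", href=True):
    print(a["href"])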