How to scrape content from a website with no class or id specified in the attributes using BeautifulSoup4

I want to scrape the content separately, like the text in the 'a' tag (i.e. only the name, "42mm Architecture"), and use 'scope of services, types of built projects, Locations of Built Projects, Style of work, Website' as CSV file headers along with their content, for the whole webpage.

These elements have no class or id associated with them, so I am a bit confused about how to extract these details properly, especially with the 'br' and 'b' tags in between.

There are multiple 'p' tags before and after the block provided. This is the website:

<h2>
  <a href="http://www.dezeen.com/tag/design-by-42mm-architecture" rel="noopener noreferrer" target="_blank">
   42mm Architecture
  </a>
  |
  <span style="color: #808080;">
   Delhi | Top Architecture Firms/ Architects in India
  </span>
 </h2>
 <!-- /wp:paragraph -->
 <p>
  <b>
   Scope of services:
  </b>
  Architecture, Interiors, Urban Design.
  <br/>
  <b>
   Types of Built Projects:
  </b>
  Residential, commercial, hospitality, offices, retail, healthcare, housing, Institutional
  <br/>
  <b>
   Locations of Built Projects:
  </b>
  New Delhi and nearby states
  <b>
   <br/>
  </b>
  <b>
   Style of work
  </b>
  <span style="font-weight: 400;">
   : Contemporary
  </span>
  <br/>
  <b>
   Website
  </b>
  <span style="font-weight: 400;">
   :
   <a href="https://www.42mm.co.in/">
    42mm.co.in
   </a>
  </span>
 </p>

So how is this done with BeautifulSoup4?

This one took some time! The webpage is poorly structured, with few tags and identifiers. What's more, they didn't even spell-check the content, e.g. in one place the heading is 'Scope of Services' and in another it is 'Scope of services', and there are more like that! So what I did is a rough extraction, and if you also have pagination in mind, I'm sure this will help you.

import requests
from bs4 import BeautifulSoup
import csv

page = requests.get('https://www.re-thinkingthefuture.com/top-architects/top-architecture-firms-in-india-part-1/')
soup = BeautifulSoup(page.text, 'lxml')

# there are many h2 tags but we want the one without any class name
h2 = soup.find_all('h2', class_='')

headers = []
contents = []
header_len = []
a_tags = []

for i in h2:
    if i.find_next().name == 'a':             # to make sure we do not grab the wrong tag
        a_tags.append(i.find_next().text)
        p = i.find_next_sibling()
        contents.append(p.text)
        h = [j.text for j in p.find_all(['strong', 'b'])]   # some headings were bold (<strong> or <b>) on the website
        headers.append(h)
        header_len.append(len(h))

# since only some headings were in bold the max number of bold would give all headers
headers = headers[header_len.index(max(header_len))]

# removing the trailing ':' from the headings
headers = [i[:-1] for i in headers]

# inserted a new heading
headers.insert(0, 'Firm')

# n for traversing through headers list
# k for traversing through a_tags list
n = 1
k = 0

# this is the difficult part where the content will have all the details in one value including the heading like this
"""
Scope of services: Architecture, Interiors, Urban Design.Types of Built Projects: Residential, commercial, hospitality, offices, retail, healthcare, housing, InstitutionalLocations of Built Projects: New Delhi and nearby statesStyle of work: ContemporaryWebsite: 42mm.co.in
"""
# thus I am splitting it using the ':' and then splicing it from the start of the each heading

contents = [i.split(':') for i in contents]
for i in contents:
    for idx, j in enumerate(i):
        h = headers[n][:5]
        if idx == 0:
            i[idx] = a_tags[k]          # the first chunk is replaced by the firm name
            n += 1
            k += 1
        elif h in j:
            i[idx] = j[:j.index(h)]     # cut off the next heading that got glued onto this value
            if n < len(headers) - 1:
                n += 1
    n = 1

    # merging any extra values produced by a stray ':' in the content
    if len(i) == 7:
        i[3] = i[3] + ' ' + i[4]
        del i[4]    # delete by position; list.remove() would drop the first equal value instead

# writing into csv file
# if you don't want a line space between each row then add newline = '' argument in the open function below
with open('output.csv', 'w') as f:   
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(contents)

This is the output:

If you want to paginate, just add the page number to the end of the url!

page_num = 1
while page_num < 13:
    page = requests.get(f'https://www.re-thinkingthefuture.com/top-architects/top-architecture-firms-in-india-part-1/{page_num}/')

    # paste the above code starting from soup = BeautifulSoup(page.text, 'lxml')

    page_num += 1
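
If you'd rather not paste the whole block into the loop, one option is to wrap the fetching in a small helper and run the extraction once per page. This is only a sketch, and get_soup is a helper name I'm introducing here:

import requests
from bs4 import BeautifulSoup

BASE = 'https://www.re-thinkingthefuture.com/top-architects/top-architecture-firms-in-india-part-1/'

def get_soup(url):
    # fetch a single page and return its parsed soup
    page = requests.get(url)
    page.raise_for_status()
    return BeautifulSoup(page.text, 'lxml')

for page_num in range(1, 13):
    soup = get_soup(f'{BASE}{page_num}/')
    # run the same extraction steps as above on each page's soup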

Hope this helps, and let me know if there are any mistakes.

Edit 1: Sorry, I forgot to mention the most important part. If a tag has no class name, you can still select it the way I did in the code above:

h2 = soup.find_all('h2', class_='')

This simply says: give me all the h2 tags that have no class name. That by itself can sometimes act as a unique identifier, since we use the absence of a class value to identify the tag.
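
As a quick self-contained check (the HTML snippet here is made up), only the h2 without a class attribute comes back:

from bs4 import BeautifulSoup

html = """
<h2 class="widget-title">Popular Posts</h2>
<h2><a href="#">42mm Architecture</a></h2>
"""
soup = BeautifulSoup(html, 'lxml')

# matches only the <h2> that has no class attribute
print(soup.find_all('h2', class_=''))
# [<h2><a href="#">42mm Architecture</a></h2>]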

You can use this example as a basis for how to scrape the information from that page:

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "https://www.gov.uk/government/publications/endorsing-bodies-start-up/start-up"

soup = BeautifulSoup(requests.get(url).content, "html.parser")
parent = soup.select_one("div.govspeak")

mapping = {"sector": "sectors", "endorses businesses": "endorses businesses in"}

all_data = []
for h3 in parent.select("h3"):
    name = h3.text
    link = h3.a["href"] if h3.a else "-"

    # only use the <ul> if it actually belongs to this <h3>
    ul = h3.find_next("ul")
    if ul and ul.find_previous("h3") == h3 and ul.parent == parent:
        # each <li> looks like "key: value"; split it, strip whitespace, and rename keys via `mapping`
        li = [
            list(map(lambda x: mapping.get((i := x.strip()), i), v))
            for li in ul.select("li")
            if len(v := li.get_text(strip=True).split(":")) == 2
        ]
    else:
        li = []

    all_data.append({"name": name, "link": link, **dict(li)})


df = pd.DataFrame(all_data)
print(df)
df.to_csv("data.csv", index=False)

This creates data.csv (screenshot from LibreOffice):
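
In case the list comprehension above is hard to read, this is what the normalization of a single "key: value" item boils down to (the input string here is made up):

mapping = {"sector": "sectors", "endorses businesses": "endorses businesses in"}

text = "sector : Technology"          # made-up <li> text
parts = text.split(":")
if len(parts) == 2:
    pair = [mapping.get(p.strip(), p.strip()) for p in parts]
    print(pair)                       # ['sectors', 'Technology']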