How to extract all URLs from a website using BeautifulSoup

I'm working on a project that needs to extract all the links from a website. With this code, I can get all the links from a single URL:
import requests
from bs4 import BeautifulSoup

source_code = requests.get('https://whosebug.com/')
soup = BeautifulSoup(source_code.content, 'lxml')
links = []

for link in soup.find_all('a', href=True):
    links.append(link.get('href'))

The problem is that if I want to extract all the URLs, I have to write another for loop, and then another, and so on. I want to extract every URL that exists on this website and its subdomains. Is there a way to do this without writing nested for loops? And even with nested loops, I don't know how many levels of for I would need to reach all the URLs.

Well, what you're asking for is actually possible, but it means an infinite loop that will keep running and running until your memory goes BoOoOoOm.

Anyway, the idea should go like this:

  • Use for item in soup.findAll('a'), then item.get('href').

  • Add the results to a set to remove duplicate URLs, and use an if condition with is not None to get rid of None objects.

  • Then keep looping until your set of pending URLs is empty, checking with something like len(urls); a minimal sketch follows below.
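
A minimal sketch of that idea, assuming the same start page as the question and an arbitrary cap of 100 pages (both are illustrative choices, not part of the answer above):

import requests
from bs4 import BeautifulSoup

start_url = 'https://whosebug.com/'  # start page taken from the question
to_visit = {start_url}               # URLs still to fetch
seen = set()                         # every URL visited so far

while to_visit and len(seen) < 100:  # cap so the loop cannot run forever
    url = to_visit.pop()
    seen.add(url)
    try:
        soup = BeautifulSoup(requests.get(url, timeout=10).content, 'lxml')
    except requests.RequestException:
        continue                     # skip pages that fail to load
    for a in soup.find_all('a'):
        href = a.get('href')
        if href is not None and href.startswith('http') and href not in seen:
            to_visit.add(href)

print(len(seen), 'URLs visited')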

WoW, it took me about 30 minutes to find a solution. I found a simple and efficient way to do this. As @αūɱҽ-αмєяιcαη mentioned, if your website links to a big site like Google, it sometimes won't stop until your memory is full of data. So there are a few steps you should keep in mind:

  1. Make a while loop that crawls your website to extract all the URLs
  2. Use exception handling to prevent crashes
  3. Filter out strings that are not URLs and separate the URLs out
  4. Set a limit on the number of URLs, for example stop once 1000 URLs have been found
  5. Stop the while loop so your PC's memory doesn't fill up

Here is some sample code; it should work fine. I actually tested it, and it was fun for me:

import requests
from bs4 import BeautifulSoup
import re

source_code = requests.get('https://whosebug.com/')
soup = BeautifulSoup(source_code.content, 'lxml')
data = []
links = []


def remove_duplicates(l):  # extract URL strings; items without a URL are dropped
    for item in l:
        match = re.search(r"(?P<url>https?://[^\s]+)", item)
        if match is not None:
            links.append(match.group("url"))


# collect the links of the start page first
for link in soup.find_all('a', href=True):
    data.append(str(link.get('href')))
remove_duplicates(data)

flag = True
while flag:
    for link in links:  # the list grows while we iterate, so the crawl expands
        try:
            source_code = requests.get(link)
            soup = BeautifulSoup(source_code.content, 'lxml')
            temp = [str(j.get('href')) for j in soup.find_all('a', href=True)]
            remove_duplicates(temp)
        except Exception as e:  # keep crawling even if one request fails
            print(e)

        if len(links) > 162:  # set limitation to number of URLs
            flag = False
            break

for url in links:
    print(url)

The output will be:

https://whosebug.com
https://www.Whosebugbusiness.com/talent
https://www.Whosebugbusiness.com/advertising
https://whosebug.com/users/login?ssrc=head&returnurl=https%3a%2f%2fwhosebug.com%2f
https://whosebug.com/users/signup?ssrc=head&returnurl=%2fusers%2fstory%2fcurrent
https://whosebug.com
https://whosebug.com
https://whosebug.com/help
https://chat.whosebug.com
https://meta.whosebug.com
https://whosebug.com/users/signup?ssrc=site_switcher&returnurl=%2fusers%2fstory%2fcurrent
https://whosebug.com/users/login?ssrc=site_switcher&returnurl=https%3a%2f%2fwhosebug.com%2f
https://stackexchange.com/sites
https://Whosebug.blog
https://whosebug.com/legal/cookie-policy
https://whosebug.com/legal/privacy-policy
https://whosebug.com/legal/terms-of-service/public
https://whosebug.com/teams
https://whosebug.com/teams
https://www.Whosebugbusiness.com/talent
https://www.Whosebugbusiness.com/advertising
https://www.g2.com/products/stack-overflow-for-teams/
https://www.g2.com/products/stack-overflow-for-teams/
https://www.fastcompany.com/most-innovative-companies/2019/sectors/enterprise
https://www.Whosebugbusiness.com/talent
https://www.Whosebugbusiness.com/advertising

https://insights.whosebug.com/
https://whosebug.com
https://whosebug.com
https://whosebug.com/jobs
https://whosebug.com/jobs/directory/developer-jobs
https://whosebug.com/jobs/salary
https://www.Whosebugbusiness.com
https://whosebug.com/teams
https://www.Whosebugbusiness.com/talent
https://www.Whosebugbusiness.com/advertising
https://whosebug.com/enterprise
https://whosebug.com/company/about
https://whosebug.com/company/about
https://whosebug.com/company/press
https://whosebug.com/company/work-here
https://whosebug.com/legal
https://whosebug.com/legal/privacy-policy
https://whosebug.com/company/contact
https://stackexchange.com
https://whosebug.com
https://serverfault.com
https://superuser.com
https://webapps.stackexchange.com
https://askubuntu.com
https://webmasters.stackexchange.com
https://gamedev.stackexchange.com
https://tex.stackexchange.com
https://softwareengineering.stackexchange.com
https://unix.stackexchange.com
https://apple.stackexchange.com
https://wordpress.stackexchange.com
https://gis.stackexchange.com
https://electronics.stackexchange.com
https://android.stackexchange.com
https://security.stackexchange.com
https://dba.stackexchange.com
https://drupal.stackexchange.com
https://sharepoint.stackexchange.com
https://ux.stackexchange.com
https://mathematica.stackexchange.com
https://salesforce.stackexchange.com
https://expressionengine.stackexchange.com
https://pt.whosebug.com
https://blender.stackexchange.com
https://networkengineering.stackexchange.com
https://crypto.stackexchange.com
https://codereview.stackexchange.com
https://magento.stackexchange.com
https://softwarerecs.stackexchange.com
https://dsp.stackexchange.com
https://emacs.stackexchange.com
https://raspberrypi.stackexchange.com
https://ru.whosebug.com
https://codegolf.stackexchange.com
https://es.whosebug.com
https://ethereum.stackexchange.com
https://datascience.stackexchange.com
https://arduino.stackexchange.com
https://bitcoin.stackexchange.com
https://sqa.stackexchange.com
https://sound.stackexchange.com
https://windowsphone.stackexchange.com
https://stackexchange.com/sites#technology
https://photo.stackexchange.com
https://scifi.stackexchange.com
https://graphicdesign.stackexchange.com
https://movies.stackexchange.com
https://music.stackexchange.com
https://worldbuilding.stackexchange.com
https://video.stackexchange.com
https://cooking.stackexchange.com
https://diy.stackexchange.com
https://money.stackexchange.com
https://academia.stackexchange.com
https://law.stackexchange.com
https://fitness.stackexchange.com
https://gardening.stackexchange.com
https://parenting.stackexchange.com
https://stackexchange.com/sites#lifearts
https://english.stackexchange.com
https://skeptics.stackexchange.com
https://judaism.stackexchange.com
https://travel.stackexchange.com
https://christianity.stackexchange.com
https://ell.stackexchange.com
https://japanese.stackexchange.com
https://chinese.stackexchange.com
https://french.stackexchange.com
https://german.stackexchange.com
https://hermeneutics.stackexchange.com
https://history.stackexchange.com
https://spanish.stackexchange.com
https://islam.stackexchange.com
https://rus.stackexchange.com
https://russian.stackexchange.com
https://gaming.stackexchange.com
https://bicycles.stackexchange.com
https://rpg.stackexchange.com
https://anime.stackexchange.com
https://puzzling.stackexchange.com
https://mechanics.stackexchange.com
https://boardgames.stackexchange.com
https://bricks.stackexchange.com
https://homebrew.stackexchange.com
https://martialarts.stackexchange.com
https://outdoors.stackexchange.com
https://poker.stackexchange.com
https://chess.stackexchange.com
https://sports.stackexchange.com
https://stackexchange.com/sites#culturerecreation
https://mathoverflow.net
https://math.stackexchange.com
https://stats.stackexchange.com
https://cstheory.stackexchange.com
https://physics.stackexchange.com
https://chemistry.stackexchange.com
https://biology.stackexchange.com
https://cs.stackexchange.com
https://philosophy.stackexchange.com
https://linguistics.stackexchange.com
https://psychology.stackexchange.com
https://scicomp.stackexchange.com
https://stackexchange.com/sites#science
https://meta.stackexchange.com
https://stackapps.com
https://api.stackexchange.com
https://data.stackexchange.com
https://Whosebug.blog?blb=1
https://www.facebook.com/officialWhosebug/
https://twitter.com/Whosebug
https://linkedin.com/company/stack-overflow
https://creativecommons.org/licenses/by-sa/4.0/
https://Whosebug.blog/2009/06/25/attribution-required/
https://whosebug.com
https://www.Whosebugbusiness.com/talent
https://www.Whosebugbusiness.com/advertising

Process finished with exit code 0

I set the limit to 162; you can raise it as high as you want, as far as your RAM allows.
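
One thing the loop above does not handle is the "this website and its subdomains" part of the question; it will happily follow external links too. A small sketch of a fix, assuming the root domain is whosebug.com (the helper name is_same_site is made up for illustration):

from urllib.parse import urlparse

def is_same_site(url, root='whosebug.com'):  # root domain is an assumption
    host = urlparse(url).netloc
    return host == root or host.endswith('.' + root)

# inside remove_duplicates(), only append URLs that pass the check:
# if match is not None and is_same_site(match.group("url")):
#     links.append(match.group("url"))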

How about this?

import requests
from simplified_scrapy.simplified_doc import SimplifiedDoc

source_code = requests.get('https://whosebug.com/')
doc = SimplifiedDoc(source_code.content.decode('utf-8'))  # incoming HTML string
lst = doc.listA(url='https://whosebug.com/')  # get all links
for a in lst:
  if a['url'].find('whosebug.com') > 0:  # keep this site and its subdomains
    print(a['url'])
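
If you'd rather not add a dependency, a rough sketch of the same filtering with BeautifulSoup alone might look like this (the startswith('http') check is an assumption to skip relative links):

import requests
from bs4 import BeautifulSoup

source_code = requests.get('https://whosebug.com/')
soup = BeautifulSoup(source_code.content, 'lxml')
for a in soup.find_all('a', href=True):
    url = a['href']
    if url.startswith('http') and 'whosebug.com' in url:  # subdomains too
        print(url)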

You can also use this scraping framework; it can help you with a lot of things:

from simplified_scrapy.spider import Spider, SimplifiedDoc
class DemoSpider(Spider):
  name = 'demo-spider'
  start_urls = ['http://www.example.com/']
  allowed_domains = ['example.com/']
  def extract(self, url, html, models, modelNames):
    doc = SimplifiedDoc(html)
    lstA = doc.listA(url=url["url"])
    return [{"Urls": lstA, "Data": None}]

from simplified_scrapy.simplified_main import SimplifiedMain
SimplifiedMain.startThread(DemoSpider())