Webscraper Python BeautifulSoup
I know what I'm trying to do is about as simple as it gets, but it has me stumped. I want to use BeautifulSoup to extract data from an HTML page (https://partner.microsoft.com/en-us/membership/application-development-competency). To do that I think I need the .find() function, but I don't know how to proceed. Any kind of help is appreciated.
Here is the HTML I'm working with:
[Screenshot of the relevant HTML][1]
[1]: https://i.stack.imgur.com/sHAMF.png
import requests
from bs4 import BeautifulSoup
url = 'https://partner.microsoft.com/en-us/membership/application-development-competency'
res = requests.get(url)
html_page = res.content
soup = BeautifulSoup(html_page, 'html.parser')
text = soup.find("div",{"class":"col-md4[2]"})
output = ''
blacklist = [
    'style',
    'head',
    'meta',
    'col-md4[0]',
    'col-md4[1]',
]
for t in text:
    if t.parent.name not in blacklist:
        output += '{} '.format(t)
sheet = '<html><body>' + text + '</body></html>';
file_object = open("record.html", "w+");
file_object.write(sheet);
file_object.close();
You can do it like this, although for this site I think XPath would be a better fit:
import requests
from bs4 import BeautifulSoup
url = 'https://partner.microsoft.com/en-us/membership/application-development-competency'
response = requests.get(url)
soup = BeautifulSoup(response.text, features="lxml")
# The content you want sits in divs with class "col-md-4";
# collect them all and print the text of the seventh one (index 6).
cols = soup.find_all("div", class_="col-md-4")
print(cols[6].getText())
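If you do want to go the XPath route instead, a minimal sketch with lxml (assuming the page keeps the same "col-md-4" grid) could look like this:

import requests
from lxml import html

url = 'https://partner.microsoft.com/en-us/membership/application-development-competency'
response = requests.get(url)
tree = html.fromstring(response.content)

# Select every div whose class list contains "col-md-4".
# XPath positions are 1-based, so element 7 here is cols[6] above.
cols = tree.xpath('//div[contains(@class, "col-md-4")]')
print(cols[6].text_content())

Either way, the key fix compared with the attempt in the question is that the class is spelled "col-md-4" and the index is applied to the list of results (cols[6]), not written inside the class string as "col-md4[2]".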