How to get nested href in python?

Goal

(I need to repeat this search several hundred times):

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" for a protein accession (e.g. "WP_000177210.1")

(i.e. https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1)

2. In the first record of the results table, select the second column, "CDS Region in Nucleotide"

(here "NC_011415.1 1997353-1998831 (-)", i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2)

3. Select "FASTA" under that sequence's name

4. Get the FASTA sequence

(i.e. ">NC_011415.1:c1998831-1997353 Escherichia coli SE11, complete sequence ATGACTTTATGGATTAACGGTGACTGGATAACGGGCCAGGGCGCATCGCGTGTGAACGTAATCCGGTAT CGGGCGAG.....").

Code

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" for a protein accession (e.g. "WP_000177210.1")

import requests
from bs4 import BeautifulSoup

url = "https://www.ncbi.nlm.nih.gov/ipg/"
r = requests.get(url, params={"term": "WP_000177210.1"})  # params must be a dict of query parameters
if r.status_code == requests.codes.ok:
    soup = BeautifulSoup(r.text,"lxml")

2. Select the second column, "CDS Region in Nucleotide", of the first record in the table (here "NC_011415.1 1997353-1998831 (-)"), i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2

# try 1 (wrong)
## I tried this first, but it seems to only reach the first level of hrefs?!
for a in soup.find_all('a', href=True):
    if a['href'].startswith("/nuccore"):
        print("Found the URL:", a['href'])

# try 2 (not sure how to access nested href)
## Based on the tags I saw in the developer tools, I think I need to get the href from the following nested structure. However, it didn't work.
soup.select("html div #maincontent div div div #ph-ipg div table tbody tr td a")
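A likely reason both attempts come back empty is that the IPG results table is filled in by JavaScript after the page loads, so the HTML that `requests` receives doesn't contain those rows at all. A quick offline check with made-up HTML (the markup below is illustrative, not the real NCBI page) shows the selector approach itself is fine once the rows exist:

```python
from bs4 import BeautifulSoup

# Made-up snippets imitating the structure seen in the developer tools:
# one as requests would see it (empty container), one as the browser
# renders it after the JavaScript has run.
static_html = "<html><body><div id='maincontent'></div></body></html>"
rendered_html = """
<html><body><div id="maincontent">
<table><tbody><tr><td>
<a href="/nuccore/NC_011415.1?from=1997353&amp;to=1998831&amp;strand=2">NC_011415.1</a>
</td></tr></tbody></table>
</div></body></html>
"""

# Same selector against both: no match before the table is rendered, one after.
selector = "#maincontent table tbody tr td a"
before = BeautifulSoup(static_html, "html.parser").select(selector)
after = BeautifulSoup(rendered_html, "html.parser").select(selector)
print(len(before), len(after))  # 0 1
```

This is why the answer below reaches for Selenium: it runs a real browser, so the page source it hands back includes the JavaScript-rendered table.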

This is where I'm stuck....

PS

This is my first time working with HTML, and also my first question here. I may not have expressed the problem well; please let me know if anything is unclear.

Without using NCBI's REST API:

import time
from bs4 import BeautifulSoup
from selenium import webdriver

# Opens a Firefox browser for scraping purposes
browser = webdriver.Firefox(executable_path=r'your\path\geckodriver.exe') # Put your own path here

# Allows you to load a page completely (with all of the JS)
browser.get('https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1')

# Delay turning the page into a soup in order to collect the newly fetched data
time.sleep(3)

# Creates the soup
soup = BeautifulSoup(browser.page_source, "html.parser")  # "html" is not a valid parser name

# Keeps every link that contains '/nuccore' but is not the bare '/nuccore' path itself
links = [a['href'] for a in soup.find_all('a', href=True) if '/nuccore' in a['href'] and not a['href'] == '/nuccore']

Notes:

You'll need the selenium package.

You'll need to install GeckoDriver.
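With the scrape above, each entry in `links` should be a relative href like the one from the question. A small `urllib.parse` sketch (the href value here is assumed from that example, not fetched live) pulls out the accession and coordinates for whatever download step you do next:

```python
from urllib.parse import urlsplit, parse_qs

# Assumed example href, matching the one scraped in the answer above.
href = "/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2"

parts = urlsplit(href)
accession = parts.path.rsplit("/", 1)[-1]                   # 'NC_011415.1'
params = {k: v[0] for k, v in parse_qs(parts.query).items()}

print(accession, params["from"], params["to"], params["strand"])
```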