Web scraping using Selenium with Python - not retrieving all elements

I am trying to scrape coinmarketcap.com with Selenium, but I can only retrieve the first 10 altcoins in the list. I read that //div[contains(concat(' ', normalize-space(@class), ' '), 'class name')] should do the trick, but it doesn't work. Can anyone help me? I also know that coinmarketcap has an API, but I just wanted to try another way.


from selenium import webdriver

driver = webdriver.Chrome(r'C:\Users\Ejer\PycharmProjects\pythonProject\chromedriver')
driver.get('https://coinmarketcap.com/')

# Collect the coin name elements by their (obfuscated) class names
Crypto = driver.find_elements_by_xpath("//div[contains(concat(' ', normalize-space(@class), ' '), 'sc-16r8icm-0 sc-1teo54s-1 lgwUsc')]")
#price = driver.find_elements_by_xpath('//td[@class="cmc-link"]')
#coincap = driver.find_elements_by_xpath('//td[@class="DAY"]')

CMC_list = []
for c in range(len(Crypto)):
    CMC_list.append(Crypto[c].text)
print(CMC_list)

driver.close()

To retrieve the first 10 altcoins in the list you need to induce WebDriverWait for visibility_of_all_elements_located() and you can use either of the following locator strategies (a consolidated sketch follows the list):

  • Using CSS_SELECTOR and get_attribute("innerHTML"):

    driver.get('https://coinmarketcap.com/')
    print([my_elem.get_attribute("innerHTML") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "table.cmc-table tbody tr td > a p[color='text']")))[:10]])
    
  • Using XPATH and the text attribute:

    driver.get('https://coinmarketcap.com/')
    print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//table[contains(@class, 'cmc-table')]//tbody//tr//td/a//p[@color='text']")))[:10]])
    
  • Console output:

    ['Bitcoin', 'Ethereum', 'XRP', 'Tether', 'Litecoin', 'Bitcoin Cash', 'Chainlink', 'Cardano', 'Polkadot', 'Binance Coin']
    
  • Note: You have to add the following imports:

    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
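
  • Putting it together: a minimal, self-contained sketch that combines the CSS_SELECTOR locator with the text attribute from the snippets above. The headless option, the try/finally cleanup, and relying on chromedriver being available on PATH are assumptions for illustration, not part of the original answer:

    from selenium import webdriver
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")  # assumption: run without a visible browser window

    driver = webdriver.Chrome(options=options)  # assumes chromedriver is on PATH
    try:
        driver.get('https://coinmarketcap.com/')
        # Wait until the coin-name cells of the main table are visible, then read their text
        names = [elem.text for elem in WebDriverWait(driver, 20).until(
            EC.visibility_of_all_elements_located((
                By.CSS_SELECTOR, "table.cmc-table tbody tr td > a p[color='text']")))[:10]]
        print(names)
    finally:
        driver.quit()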