Scrape Data with rvest
I want to get the article names, grouped by category, from https://www.inquirer.net/article-index?d=2020-6-13.
I tried to read the article names like this:
library(rvest)
year <- 2020
month <- 06
day <- 13
url <- paste('http://www.inquirer.net/article-index?d=', year, '-', month, '-', day, sep = "")
pg <- read_html(url)
test <- pg %>%
  html_nodes("#index-wrap") %>%
  html_text()
This returns all the article names as one long string, and it's very messy.
Ultimately I want a data frame that looks like this:
Date        Category   Article Name
2020-06-13  News       ‘We can never let our guard down’ vs terrorism – Cayetano
2020-06-13  News       PNP spox says mañanita remark did not intend to put Sinas in bad light
2020-06-13  News       After stranded mom’s death, Pasay LGU helps over 400 stranded individuals
2020-06-13  World      4 dead after tanker truck explodes on highway in China
...
2020-06-13  Lifestyle  Book: Melania Trump delayed 2017 move to DC to get new prenup
Does anyone know what I might be missing? Very new to this, thanks!
You were missing read_html() and then using it in your dplyr statements:
library(rvest)
year <- 2020
month <- 06
day <- 13
url <- paste('http://www.inquirer.net/article-index?d=', year, '-', month, '-', day, sep = "")
# added page
page <- read_html(url)
test <- page %>%
  # changed xpath
  html_node(xpath = '//*[@id="index-wrap"]') %>%
  html_text()
test
Update: I'm bad at dplyr, but here is what I had before bed:
library(rvest)
year <- 2020
month <- 06
day <- 13
url <- paste('http://www.inquirer.net/article-index?d=', year, '-', month, '-', day, sep = "")
# added page
page <- read_html(url)
titles <- page %>%
  html_nodes(xpath = '//*[@id="index-wrap"]/h4') %>%
  html_text()
sections <- page %>%
  html_nodes(xpath = '//*[@id="index-wrap"]/ul')
stories <- sections %>%
  html_nodes(xpath = '//li/a') %>%
  html_text()
stories
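To get from titles and stories to the requested data frame, here is a minimal sketch. It assumes each <ul> of stories directly follows its category <h4> inside #index-wrap, so the two node sets pair up one-to-one, and it uses the relative XPath './/li/a' so each section only yields its own links (the absolute '//li/a' above searches the whole document from every node):

library(rvest)
library(purrr)
library(tibble)

# Stories per section; the leading dot keeps the search inside each <ul>
stories_by_section <- map(sections, ~ html_nodes(.x, xpath = './/li/a') %>% html_text())

# Repeat each category name once per story in its section
result <- tibble(
  Date = sprintf('%d-%02d-%02d', year, month, day),  # the date built into the URL
  Category = rep(titles, lengths(stories_by_section)),
  `Article Name` = unlist(stories_by_section)
)
result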
This is probably the closest you can get:
library(rvest)
#> Loading required package: xml2
library(tibble)

year <- 2020
month <- 06
day <- 13
url <- paste0('http://www.inquirer.net/article-index?d=', year, '-', month, '-', day)

div <- read_html(url) %>% html_node(xpath = '//*[@id="index-wrap"]')
links <- html_nodes(div, xpath = '//a[@rel="bookmark"]')
post_date <- html_nodes(div, xpath = '//span[@class="index-postdate"]') %>%
  html_text()

test <- tibble(date = post_date,
               text = html_text(links),
               link = html_attr(links, "href"))
test
#> # A tibble: 261 x 3
#> date text link
#> <chr> <chr> <chr>
#> 1 1 day a~ ‘We can never let our guard down~ https://newsinfo.inquirer.net/129~
#> 2 1 day a~ PNP spox says mañanita remark di~ https://newsinfo.inquirer.net/129~
#> 3 1 day a~ After stranded mom’s death, Pasa~ https://newsinfo.inquirer.net/129~
#> 4 1 day a~ Putting up lining for bike lanes~ https://newsinfo.inquirer.net/129~
#> 5 1 day a~ PH Army provides accommodation f~ https://newsinfo.inquirer.net/129~
#> 6 1 day a~ DA: Local poultry production suf~ https://newsinfo.inquirer.net/129~
#> 7 1 day a~ IATF assessing proposed design t~ https://newsinfo.inquirer.net/129~
#> 8 1 day a~ PCSO lost ‘most likely’ P13B dur~ https://newsinfo.inquirer.net/129~
#> 9 2 days ~ DOH: No IATF recommendations yet~ https://newsinfo.inquirer.net/129~
#> 10 2 days ~ PH coronavirus cases exceed 25,0~ https://newsinfo.inquirer.net/129~
#> # ... with 251 more rows
Created on 2020-06-14 by the reprex package (v0.3.0)
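If you also want the Category column and a fixed date instead of the relative "x days ago" strings, here is a hedged extension of the above. It assumes, as in the earlier answer, that the <h4> headings and <ul> lists pair up one-to-one inside #index-wrap; it walks each section and tags its bookmark links with the heading text, taking the date from the URL query since the whole index page is for a single day:

library(rvest)
library(tibble)
library(purrr)

div <- read_html(url) %>% html_node(xpath = '//*[@id="index-wrap"]')
headings <- html_nodes(div, xpath = './h4') %>% html_text()
lists <- html_nodes(div, xpath = './ul')

# One tibble per category, bound row-wise
by_category <- map2_dfr(headings, lists, function(h, ul) {
  a <- html_nodes(ul, xpath = './/a[@rel="bookmark"]')
  tibble(Date = sprintf('%d-%02d-%02d', year, month, day),
         Category = h,
         `Article Name` = html_text(a),
         link = html_attr(a, "href"))
})
by_category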