Counting words between markers in R

I have imported several text files into a corpus. Each article has several sections, supposedly written on different days, and each is marked with #. A week is marked with $. Within each article, how can I count how many words are in each day and how many are in each week? In text T1 below, the end of each day's entry is marked with #, and I need the word count per day; weeks are delimited by $, and I also need the word count per week. I also have texts T2, T3, ... Tn. How can I do this in R with quanteda?

<T1>
 (25.02.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage.                                                        

# (26.02.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                       

# (28.02.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.              
# (02.03.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. .                                           

# (03.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                                    

#
($)
 (04.03.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.                                      

# (05.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.  
# (06.03.2009)  This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. 

# (07.03.2009)  This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. 

# (08.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                    

# (09.03.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.                          

# (10.03.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage.                             

#
($)

Those texts look familiar!

If you assign your content above to txt, then you can wrap it in a quanteda corpus and use corpus_segment() to split it on the tags.

library("quanteda")
## Package version: 1.5.0

corp <- corpus(txt) %>%
  # first pass: split into weeks at the trailing "($)" markers
  corpus_segment(pattern = "($)", valuetype = "fixed", pattern_position = "after") %>%
  # second pass: split each week into days at the leading (dd.mm.yyyy) tags
  corpus_segment(pattern = "\\(\\d{2}\\.\\d{2}\\.\\d{4}\\)", valuetype = "regex", pattern_position = "before")

The first segmentation splits along the "weeks", but since there were no date tags there, we just segment again to get the dates. This produces:

sapply(head(texts(corp)), substring, 1, 100)
##                                                                                                text1.1.1 
## "This chapter thoroughly describes the idea of analyzing text \"as data\" with a social science focus. " 
##                                                                                                text1.1.2 
##   "Probabilistic methods for classifying text form a rich tradition in machine learning and natural lan" 
##                                                                                                text1.1.3 
## "Borrowing from automated \"text as data\" approaches, we show how statistical scaling models can be ap" 
##                                                                                                text1.1.4 
## "This chapter thoroughly describes the idea of analyzing text \"as data\" with a social science focus. " 
##                                                                                                text1.1.5 
##   "Probabilistic methods for classifying text form a rich tradition in machine learning and natural lan" 
##                                                                                                text1.2.1 
## "Borrowing from automated \"text as data\" approaches, we show how statistical scaling models can be ap"

It's best to tidy up the extracted tags and turn them into actual dates, which you can later use to split the corpus into weeks or any other date range you want.

# tidy up docvars: rename the extracted tag docvar, strip the parentheses, and parse as dates
names(docvars(corp))[1] <- "date"
docvars(corp, "date") <-
  stringi::stri_replace_all_fixed(docvars(corp, "date"), c("(", ")"), c("", ""), vectorize_all = FALSE) %>%
  lubridate::dmy()

summary(corp)
## Corpus consisting of 12 documents:
## 
##       Text Types Tokens Sentences       date
##  text1.1.1    83    135         6 2009-02-25
##  text1.1.2   119    195         7 2009-02-26
##  text1.1.3    96    137         5 2009-02-28
##  text1.1.4    83    136         6 2009-03-02
##  text1.1.5   119    195         7 2009-03-03
##  text1.2.1    96    137         5 2009-03-04
##  text1.2.2   119    195         7 2009-03-05
##  text1.2.3    83    135         6 2009-03-06
##  text1.2.4    83    135         6 2009-03-07
##  text1.2.5   119    195         7 2009-03-08
##  text1.2.6    96    137         5 2009-03-09
##  text1.2.7    83    135         6 2009-03-10
## 
## Source: /private/var/folders/1v/ps2x_tvd0yg0lypdlshg_vwc0000gp/T/RtmpDG9tad/reprexd97c6e16bef8/* on x86_64 by kbenoit
## Created: Sun Jul 28 11:29:45 2019
## Notes: corpus_segment.corpus(., pattern = "\\(\\d{2}\\.\\d{2}\\.\\d{4}\\)", valuetype = "regex", pattern_position = "before")
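
From there, the word counts the question asks for are straightforward: ntoken() gives one token count per (daily) document, and the weekly totals come from summing those counts within each week. A minimal sketch, assuming the default document names produced by the two-pass segmentation above, where the middle index (e.g. the "2" in text1.2.3) identifies the week and the trailing index counts days within it:

# words per day: one token count per segmented (daily) document
words_per_day <- ntoken(corp)

# words per week: dropping the trailing day index from each docname
# leaves a text.week identifier (e.g. "text1.2") from the first "($)"
# segmentation pass, so we can sum the daily counts within each week
week_id <- sub("\\.\\d+$", "", docnames(corp))
words_per_week <- tapply(words_per_day, week_id, sum)

If you would rather group by calendar weeks than by the $-delimited ones, you could group on lubridate::isoweek(docvars(corp, "date")) instead.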