Counting words and word stems in a large dataframe (RStudio)

I have a large dataframe consisting of tweets, and a keyword dictionary loaded as a list that contains words and word stems related to emotion (kw_Emo). I need to find a way to count how often any given word/word stem from kw_Emo occurs in each tweet. In kw_Emo, word stems are marked with an asterisk (*). For example, one stem is ador*, which means I need to account for any lettering pattern beginning with ador..., such as adorable, adore, or adoring.


From a previous Stack Overflow discussion (see my earlier question in my profile), the following solution helped me a great deal, but it only counts exact character matches (e.g. only ador, not adorable):

  1. Load the relevant packages.

    library(stringr)

  2. Identify and remove the * from the word stems in kw_Emo.

    for (x in 1:length(kw_Emo)) {
      if (grepl("[*]", kw_Emo[x]) == TRUE) {
        kw_Emo[x] <- substr(kw_Emo[x], 1, nchar(kw_Emo[x]) - 1)
      }
    }

  3. Create a new column for each word/word stem from kw_Emo, with a default value of 0.

    for (x in 1:length(keywords)) {
      dataframe[, keywords[x]] <- 0
    }

  4. Split each tweet into a vector of words, check whether any of them equals a keyword, and add +1 to the matching word/word stem column.

    for (x in 1:nrow(dataframe)) {
      partials <- data.frame(str_split(dataframe[x, 2], " "), stringsAsFactors = FALSE)
      partials <- partials[partials[] != ""]
      for (y in 1:length(partials)) {
        for (z in 1:length(keywords)) {
          if (keywords[z] == partials[y]) {
            dataframe[x, keywords[z]] <- dataframe[x, keywords[z]] + 1
          }
        }
      }
    }

Is there a way to adapt this solution to account for the word stems? I am wondering whether it might be possible to first use a stringr pattern to replace occurrences of a stem with its exact characters, and then use this exact-match solution. For example, something like stringr::str_replace_all(x, "ador[a-z]+", "ador"). But I am not sure how to do that with my large dictionary and its many stems. Perhaps the loop that removes the [*] could be adapted somehow, since it essentially already identifies all the stems?
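For illustration, here is a rough (untested) sketch of what I have in mind, with kw_Emo loaded as shown further below; the \b...[a-z]* patterns are only my guess at how the stems could be rewritten before running the exact-match solution:

library(stringr)

# untested sketch: collapse every word that starts with a stem down to the
# bare stem, so that the exact-match solution above could then count it
stems <- kw_Emo[grepl("[*]$", kw_Emo)]          # entries marked as stems
stems <- sub("[*]$", "", stems)                 # drop the trailing asterisk

# named vector: names are the patterns, values are the replacements
rules <- setNames(stems, paste0("\\b", stems, "[a-z]*"))

TestTweets$clean_text <- str_replace_all(TestTweets$clean_text, rules)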


Here is a reproducible example of my dataframe, called TestTweets, with the text to be analysed in a column called clean_text:

dput(droplevels(head(TestTweets, 20)))

structure(list(Time = c("24/06/2016 10:55:04", "24/06/2016 10:55:04", 
"24/06/2016 10:55:04", "24/06/2016 10:55:04", "24/06/2016 10:55:04", 
"24/06/2016 10:55:04", "24/06/2016 10:55:04", "24/06/2016 10:55:04", 
"24/06/2016 10:55:04", "24/06/2016 10:55:04", "24/06/2016 10:55:04", 
"24/06/2016 10:55:04", "24/06/2016 10:55:04", "24/06/2016 10:55:04", 
"24/06/2016 10:55:04", "24/06/2016 10:55:04", "24/06/2016 10:55:04", 
"24/06/2016 10:55:04", "24/06/2016 10:55:03", "24/06/2016 10:55:03"
), clean_text = c("mayagoodfellow as always making sense of it all for us ive never felt less welcome in this country brexit  httpstcoiai5xa9ywv", 
"never underestimate power of stupid people in a democracy brexit", 
"a quick guide to brexit and beyond after britain votes to quit eu httpstcos1xkzrumvg httpstcocniutojkt0", 
"this selfinflicted wound will be his legacy cameron falls on sword after brexit euref httpstcoegph3qonbj httpstcohbyhxodeda", 
"so the uk is out cameron resigned scotland wants to leave great britain sinn fein plans to unify ireland and its o", 
"this is a very good summary no biasspinagenda of the legal ramifications of the leave result brexit httpstcolobtyo48ng", 
"you cant make this up cornwall votes out immediately pleads to keep eu cash this was never a rehearsal httpstco", 
"no matter the outcome brexit polls demonstrate how quickly half of any population can be convinced to vote against itself q", 
"i wouldnt mind so much but the result is based on a pack of lies and unaccountable promises democracy didnt win brexit pro", 
"so the uk is out cameron resigned scotland wants to leave great britain sinn fein plans to unify ireland and its o", 
"absolutely brilliant poll on brexit by yougov httpstcoepevg1moaw", 
"retweeted mikhail golub golub\r\n\r\nbrexit to be followed by grexit departugal italeave fruckoff czechout httpstcoavkpfesddz", 
"think the brexit campaign relies on the same sort of logic that drpepper does whats the worst that can happen thingsthatarewellbrexit", 
"am baffled by nigel farages claim that brexit is a victory for real people as if the 47 voting remain are fucking smu", 
"not one of the uks problems has been solved by brexit vote migration inequality the uks centurylong decline as", 
"scotland should never leave eu  calls for new independence vote grow httpstcorudiyvthia brexit", 
"the most articulate take on brexit is actually this ft reader comment today httpstco98b4dwsrtv", 
"65 million refugees half of them are children  maybe instead of fighting each other we should be working hand in hand ", 
"im laughing at people who voted for brexit but are complaining about the exchange rate affecting their holiday\r\nremain", 
"life is too short to wear boring shoes  brexit")), .Names = c("Time", 
"clean_text"), row.names = c(NA, 20L), class = c("tbl_df", "tbl", 
"data.frame"))

And here is kw_Emo:

kw_Emo <- c("abusi*", "accept", "accepta*", "accepted", "accepting", "accepts", "ache*", "aching", "active*", "admir*", "ador*", "advantag*", "adventur*", "advers*", "affection*", "afraid", "aggravat*", "aggress*", "agoniz*", "agony", "agree", "agreeab*", "agreed", "agreeing", "agreement*", "agrees", "alarm*", "alone", "alright*", "amaz*", "amor*", "amus*", "anger*", "angr*", "anguish*", "annoy*", "antagoni*", "anxi*", "aok", "apath*", "appall*", "appreciat*", "apprehens*", "argh*", "argu*", "arrogan*", "asham*", "assault*", "asshole*", "assur*", "attachment*", "attract*", "aversi*", "avoid*", "award*", "awesome", "awful", "awkward*", "bashful*", "bastard*", "battl*", "beaten", "beaut*", "beloved", "benefic*", "benevolen*", "benign*", "best", "better", "bitch*", "bitter*", "blam*", "bless*", "bold*", "bonus*", "bore*", "boring", "bother*", "brave*", "bright*", "brillian*", "broke", "burden*", "calm*", "cared", "carefree", "careful*", "careless*", "cares", "casual", "casually", "certain*", "challeng*", "champ*", "charit*", "charm*", "cheer*", "cherish*", "chuckl*", "clever*", "comed*", "comfort*", "commitment*", "complain*", "compliment*", "concerned", "confidence", "confident", "confidently", "confront*", "confus*", "considerate", "contempt*", "contented*", "contentment", "contradic*", "convinc*", "cool", "courag*", "crap", "crappy", "craz*", "create*", "creati*", "credit*", "cried", "cries", "critical", "critici*", "crude*", "cry", "crying", "cunt*", "cut", "cute*", "cutie*", "cynic", "danger*", "daring", "darlin*", "daze*", "dear*", "decay*", "defeat*", "defect*", "definite", "definitely", "degrad*", "delectabl*", "delicate*", "delicious*", "deligh*", "depress*", "depriv*", "despair*", "desperat*", "despis*", "destruct*", "determina*", "determined", "devastat*", "difficult*", "digni*", "disadvantage*", "disagree*", "disappoint*", "disaster*", "discomfort*", "discourag*", "dishearten*", "disillusion*", "dislike", "disliked", "dislikes", "disliking", "dismay*", "dissatisf*", "distract*", "distraught", "distress*", "distrust*", "disturb*", "divin*", "domina*", "doom*", "dork*", "doubt*", "dread*", "dull*", "dumb*", "dump*", "dwell*", "dynam*", "eager*", "ease*", "easie*", "easily", "easiness", "easing", "easy*", "ecsta*", "efficien*", "egotis*", "elegan*", "embarrass*", "emotion", "emotional", "empt*", "encourag*", "energ*", "engag*", "enjoy*", "enrag*", "entertain*", "enthus*", "envie*", "envious", "excel*", "excit*", "excruciat*", "exhaust*", "fab", "fabulous*", "fail*", "fake", "fantastic*", "fatal*", "fatigu*", "favor*", "favour*", "fear", "feared", "fearful*", "fearing", "fearless*", "fears", "feroc*", "festiv*", "feud*", "fiery", "fiesta*", "fine", "fired", "flatter*", "flawless*", "flexib*", "flirt*", "flunk*", "foe*", "fond", "fondly", "fondness", "fool*", "forgave", "forgiv*", "fought", "frantic*", "freak*", "free", "freeb*", "freed*", "freeing", "freely", "freeness", "freer", "frees*", "friend*", "fright*", "frustrat*", "fuck", "fucked*", "fucker*", "fuckin*", "fucks", "fume*", "fuming", "fun", "funn*", "furious*", "fury", "geek*", "genero*", "gentle", "gentler", "gentlest", "gently", "giggl*", "giver*", "giving", "glad", "gladly", "glamor*", "glamour*", "gloom*", "glori*", "glory", "goddam*", "gorgeous*", "gossip*", "grace", "graced", "graceful*", "graces", "graci*", "grand", "grande*", "gratef*", "grati*", "grave*", "great", "grief", "griev*", "grim*", "grin", "grinn*", "grins", "grouch*", "grr*", "guilt*", "ha", "haha*", "handsom*", "happi*", "happy", "harass*", "hated", 
"hateful*", "hater*", "hates", "hating", "hatred", "hazy", "heartbreak*", "heartbroke*", "heartfelt", "heartless*", "heartwarm*", "heh*", "hellish", "helper*", "helpful*", "helping", "helpless*", "helps", "hesita*", "hilarious", "hoho*", "homesick*", "honour*", "hope", "hoped", "hopeful", "hopefully", "hopefulness", "hopeless*", "hopes", "hoping", "horr*", "hostil*", "hug", "hugg*", "hugs", "humiliat*", "humor*", "humour*", "hurra*", "idiot", "ignor*", "impatien*", "impersonal", "impolite*", "importan*", "impress*", "improve*", "improving", "inadequa*", "incentive*", "indecis*", "ineffect*", "inferior*", "inhib*", "innocen*", "insecur*", "insincer*", "inspir*", "insult*", "intell*", "interest*", "interrup*", "intimidat*", "invigor*", "irrational*", "irrita*", "isolat*", "jaded", "jealous*", "jerk", "jerked", "jerks", "joke*", "joking", "joll*", "joy*", "keen*", "kidding", "kind", "kindly", "kindn*", "kiss*", "laidback", "lame*", "laugh*", "lazie*", "lazy", "liabilit*", "libert*", "lied", "lies", "like", "likeab*", "liked", "likes", "liking", "livel*", "LMAO", "LOL", "lone*", "longing*", "lose", "loser*", "loses", "losing", "loss*", "lost", "lous*", "love", "loved", "lovely", "lover*", "loves", "loving*", "low*", "luck", "lucked", "lucki*", "luckless*", "lucks", "lucky", "ludicrous*", "lying", "mad", "maddening", "madder", "maddest", "madly", "magnific*", "maniac*", "masochis*", "melanchol*", "merit*", "merr*", "mess", "messy", "miser*", "miss", "missed", "misses", "missing", "mistak*", "mock", "mocked", "mocker*", "mocking", "mocks", "molest*", "mooch*", "mood", "moodi*", "moods", "moody", "moron*", "mourn*", "nag*", "nast*", "neat*", "needy", "neglect*", "nerd*", "nervous*", "neurotic*", "nice*", "numb*", "nurtur*", "obnoxious*", "obsess*", "offence*", "offens*", "ok", "okay", "okays", "oks", "openminded*", "openness", "opportun*", "optimal*", "optimi*", "original", "outgoing", "outrag*", "overwhelm*", "pained", "painf*", "paining", "painl*", "pains", "palatabl*", "panic*", "paradise", "paranoi*", "partie*", "party*", "passion*", "pathetic*", "peculiar*", "perfect*", "personal", "perver*", "pessimis*", "petrif*", "pettie*", "petty*", "phobi*", "piss*", "piti*", "pity*", "play", "played", "playful*", "playing", "plays", "pleasant*", "please*", "pleasing", "pleasur*", "poison*", "popular*", "positiv*", "prais*", "precious*", "pressur*", "prettie*", "pretty", "prick*", "pride", "privileg*", "prize*", "problem*", "profit*", "promis*", "protested", "protesting", "proud*", "puk*", "radian*", "rage*", "raging", "rancid*", "rape*", "raping", "rapist*", "readiness", "ready", "reassur*", "reek*", "regret*", "reject*", "relax*", "relief", "reliev*", "reluctan*", "remorse*", "repress*", "resent*", "resign*", "resolv*", "restless*", "revigor*", "reward*", "rich*", "ridicul*", "rigid*", "risk*", "ROFL", "romanc*", "romantic*", "rotten", "rude*", "sad", "sadde*", "sadly", "sadness", "sarcas*", "satisf*", "savage*", "scare*", "scaring", "scary", "sceptic*", "scream*", "screw*", "selfish*", "sentimental*", "serious", "seriously", "seriousness", "severe*", "shake*", "shaki*", "shaky", "share", "shared", "shares", "sharing", "shit*", "shock*", "shook", "shy*", "sigh", "sighed", "sighing", "sighs", "silli*", "silly", "sincer*", "skeptic*", "smart*", "smil*", "smother*", "smug*", "snob*", "sob", "sobbed", "sobbing", "sobs", "sociab*", "solemn*", "sorrow*", "sorry", "soulmate*", "special", "splend*", "stammer*", "stank", "startl*", "stink*", "strain*", "strange", "strength*", "stress*", "strong*", "struggl*", 
"stubborn*", "stunk", "stunned", "stuns", "stupid*", "stutter*", "succeed*", "success*", "suck", "sucked", "sucker*", "sucks", "sucky", "sunnier", "sunniest", "sunny", "sunshin*", "super", "superior*", "support", "supported", "supporter*", "supporting", "supportive*", "supports", "suprem*", "sure*", "surpris*", "suspicio*", "sweet", "sweetheart*", "sweetie*", "sweetly", "sweetness*", "sweets", "talent*", "tantrum*", "tears", "teas*", "tehe", "temper", "tempers", "tender*", "tense*", "tensing", "tension*", "terribl*", "terrific*", "terrified", "terrifies", "terrify", "terrifying", "terror*", "thank", "thanked", "thankf*", "thanks", "thief", "thieve*", "thoughtful*", "threat*", "thrill*", "ticked", "timid*", "toleran*", "tortur*", "tough*", "traged*", "tragic*", "tranquil*", "trauma*", "treasur*", "treat", "trembl*", "trick*", "trite", "triumph*", "trivi*", "troubl*", "TRUE", "trueness", "truer", "truest", "truly", "trust*", "truth*", "turmoil", "ugh", "ugl*", "unattractive", "uncertain*", "uncomfortabl*", "uncontrol*", "uneas*", "unfortunate*", "unfriendly", "ungrateful*", "unhapp*", "unimportant", "unimpress*", "unkind", "unlov*", "unpleasant", "unprotected", "unsavo*", "unsuccessful*", "unsure*", "unwelcom*", "upset*", "uptight*", "useful*", "useless*", "vain", "valuabl*", "valuing", "vanity", "vicious*", "vigor*", "vigour*", "villain*", "violat*", "virtuo*", "vital*", "vulnerab*", "vulture*", "warfare*", "warm*", "warred", "weak*", "wealth*", "weapon*", "weep*", "weird*", "welcom*", "well*", "wept", "whine*", "whining", "willing", "wimp*", "win", "winn*", "wins", "wisdom", "wise*", "witch", "woe*", "won", "wonderf*", "worr*", "worse*", "worship*", "worst", "wow*", "yay", "yays","yearn*","stench*")


The code used in MRau's answer did not work for me:

ind_stem <- grep("[*]", kw_Emo)
kw_stem  <- gsub("[*]", "", kw_Emo[ind_stem])
kw_word  <- kw_Emo[-ind_stem]
tweets <- strsplit(TestTweets[, "clean_text"], "\\s+")

for (kws in kw_stem) {
  count_i <- unlist(lapply(tweets, function(x) length(grep(kws, x))))
  TestTweets <- cbind(TestTweets, count_i)
  colnames(TestTweets)[ncol(TestTweets)] <- paste0(kws, "*")
}
for (kww in kw_word) {
  count_i <- unlist(lapply(tweets, function(x) length(grep(paste0("^", kww, "$"), x))))
  TestTweets <- cbind(TestTweets, count_i)
  colnames(TestTweets)[ncol(TestTweets)] <- kww
}
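(A guess as to why, not stated in either post: TestTweets is a tbl_df according to the dput above, so with the tibble package attached TestTweets[, "clean_text"] stays a one-column tibble instead of becoming a character vector, and strsplit() then fails with a "non-character argument" error. Extracting the column with [[ ]] avoids that:)

# [["clean_text"]] (or $clean_text) gives the plain character vector strsplit() expects
tweets <- strsplit(TestTweets[["clean_text"]], "\\s+")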

So first I would get rid of some of the for loops:

ind_stem <- grep("[*]", kw_Emo)
kw_stem  <- gsub("[*]", "", kw_Emo[ind_stem])
kw_word  <- kw_Emo[-ind_stem]
tweets <- strsplit(TestTweets[, "clean_text"], "\\s+")

I generated separate vectors for the whole words and for the stems. tweets is a list of word vectors: strsplit splits each string using whitespace (the regex \\s+) as the separator.
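For example, assuming the clean_text column was extracted as a character vector (see the note above), the last tweet in the sample splits into a plain vector of words:

> tweets[[20]]
[1] "life"   "is"     "too"    "short"  "to"     "wear"   "boring" "shoes"  "brexit"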

When it comes to matching the words/stems, grep can be used for both. By default it finds all the words that contain the given pattern:

> grep("Abc", c("Abc", "Abcdef"))
[1] 1 2

But if you use ^ and $ you can get an "exact" match:

> grep("^Abc$", c("Abc", "Abcdef"))
[1] 1

In your code you want to look at the length of the grep output and, for example, append it to your data.frame:

for (kws in kw_stem) {
    count_i <- unlist(lapply(tweets, function(x) length(grep(kws, x))))
    TestTweets <- cbind(TestTweets, count_i)
    colnames(TestTweets)[ncol(TestTweets)] <- paste0(kws, "*")
}
for (kww in kw_word) {
    count_i <- unlist(lapply(tweets, function(x) length(grep(paste0("^", kww, "$"), x))))
    TestTweets <- cbind(TestTweets, count_i)
    colnames(TestTweets)[ncol(TestTweets)] <- kww
}

A snippet of the output:

> TestTweets[19:20, c("clean_text", "boring")]
                                                                                                                    clean_text boring
19 im laughing at people who voted for brexit but are complaining about the exchange rate affecting their holiday\r\nremain      0
20                                                                           life is too short to wear boring shoes  brexit      1

Of course, you could optimise this code further, or decide, depending on your problem, whether to use grep(paste0("^", kws), x) instead of grep(kws, x) in the first loop, and so on.
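As a sketch of one such optimisation (not part of the answer above, and untested): the two loops could be replaced by a single vectorised pass with stringr::str_count, building one regular expression per dictionary entry. This assumes the entries contain no regex metacharacters apart from the trailing *, and the counts can differ slightly from the grep version (every occurrence is counted, not every matching word):

library(stringr)

# one pattern per dictionary entry: stems match any word beginning with them,
# plain words must match as whole words (word boundaries on both sides)
patterns <- ifelse(grepl("[*]$", kw_Emo),
                   paste0("\\b", sub("[*]$", "", kw_Emo)),
                   paste0("\\b", kw_Emo, "\\b"))

# count every pattern in every tweet in one go: a matrix with one row per tweet
counts <- sapply(patterns, function(p) str_count(TestTweets$clean_text, p))
colnames(counts) <- kw_Emo
TestTweets_counted <- cbind(TestTweets, counts)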