How to remove punctuation and numbers during TweetTokenizer step in NLP?

I'm fairly new to NLP, so please bear with me. I have the full list of tweets since Trump took office, and I'm tokenizing the text to analyze its content.

I'm using the TweetTokenizer from the nltk library in Python, and I'm trying to tokenize everything except numbers and punctuation. The problem is that my code removes all tokens except one.

I've tried using the .isalpha() method, but that didn't work; I thought it was supposed to keep only strings composed of letters.

# Create a corpus from the tweets
text = non_re['text']
# Make all text lowercase
low_txt = [l.lower() for l in text]

# Iteratively tokenize the tweets
TokTweet = TweetTokenizer()
tokens = [TokTweet.tokenize(t) for t in low_txt
          if t.isalpha()]

My output is just one token. If I remove the if t.isalpha() statement, I get all the tokens, including numbers and punctuation, which suggests isalpha() is the culprit behind the over-pruning.
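A quick sketch of why this happens: str.isalpha() returns True only when *every* character in the string is a letter, and here it is being applied to whole tweets rather than individual tokens, so any tweet containing a space, digit, or punctuation mark is filtered out entirely:

```python
# isalpha() is True only if EVERY character is a letter, so applying it
# to a full tweet (which almost always contains spaces) drops the tweet.
print("hello".isalpha())        # True  - letters only
print("hello world".isalpha())  # False - contains a space
print("tweet 2020!".isalpha())  # False - space, digit, and punctuation
```

Only a tweet that is a single bare word with no spaces would survive this filter, which matches the observed "only one token" output.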

What I'd like is a way to get the tokens from the tweet text without punctuation and numbers. Thanks for your help!

Try the approach below:

import string
import re
import nltk
from nltk.tokenize import TweetTokenizer

tweet = "first think another Disney movie, might good, it's kids movie. watch it, can't help enjoy it. ages love movie. first saw movie 10 8 years later still love it! Danny Glover superb could play"

def clean_text(text):
    # remove numbers
    text_nonum = re.sub(r'\d+', '', text)
    # remove punctuations and convert characters to lower case
    text_nopunct = "".join([char.lower() for char in text_nonum if char not in string.punctuation]) 
    # substitute multiple whitespace with single whitespace
    # Also, removes leading and trailing whitespaces
    text_no_doublespace = re.sub(r'\s+', ' ', text_nopunct).strip()
    return text_no_doublespace

cleaned_tweet = clean_text(tweet)
tt = TweetTokenizer()
print(tt.tokenize(cleaned_tweet))

Output:

['first', 'think', 'another', 'disney', 'movie', 'might', 'good', 'its', 'kids', 'movie', 'watch', 'it', 'cant', 'help', 'enjoy', 'it', 'ages', 'love', 'movie', 'first', 'saw', 'movie', 'years', 'later', 'still', 'love', 'it', 'danny', 'glover', 'superb', 'could', 'play']
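Alternatively, instead of cleaning the raw string first, you can tokenize first and then filter per token, keeping only tokens made entirely of letters (a sketch; note one behavioral difference, flagged in the comments):

```python
from nltk.tokenize import TweetTokenizer

tweet = "first saw movie 10 8 years later, still love it!"

tt = TweetTokenizer()
# Tokenize first, then keep only tokens that consist purely of letters.
# This drops number tokens like "10" and punctuation tokens like "," and "!".
tokens = [tok for tok in tt.tokenize(tweet.lower()) if tok.isalpha()]
print(tokens)
```

Caveat: contractions such as "it's" or "can't" contain an apostrophe, so tok.isalpha() drops them entirely, whereas the string-cleaning approach above keeps them as "its" and "cant". Pick whichever behavior suits your analysis.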
# Function for removing punctuation from text; it also reports the total number of punctuation marks removed.
# Input: the function takes an existing file name and a new file name as strings, i.e. 'existingFileName.txt' and 'newFileName.txt'.
# Return: it returns two things: the punctuation-free file opened in read mode, and a punctuation count.



def removePunctuation(tokenizeSampleText, newFileName):

    from nltk.tokenize import word_tokenize
    import string

    # Read and tokenize the existing file
    with open(tokenizeSampleText, 'r') as existingFile:
        tokenize_existingFile = word_tokenize(existingFile.read())

    punctuation = set(string.punctuation)
    count_pun = 0

    # Write every non-punctuation token to the new file, counting
    # the punctuation tokens that get dropped along the way
    with open(newFileName, 'w+') as puncRemovedFile:
        for word in tokenize_existingFile:
            if word in punctuation:
                count_pun += 1
            else:
                puncRemovedFile.write(word + ' ')

    return open(newFileName, 'r'), count_pun

punRemoved, punCount = removePunctuation('Macbeth.txt', 'Macbeth-punctuationRemoved.txt')
print(f'Total Punctuation : {punCount}')
print(punRemoved.read())
punRemoved.close()