How to do one-hot encoding for text in a paragraph at the sentence level?

My sentences are stored in a text file, as shown below.

radiologicalreport =1.  MDCT OF THE CHEST   History: A 58-year-old male, known case lung s/p LUL segmentectomy.  Technique: Plain and enhanced-MPR CT chest is performed using 2 mm interval.  Previous study: 03/03/2018 (other hospital)  Findings:   Lung parenchyma: The study reveals evidence of apicoposterior segmentectomy of LUL showing soft tissue thickening adjacent surgical bed at LUL, possibly post operation.

My end goal is to apply LDA to classify each sentence into a topic. Before that, I want to one-hot encode the text. The problem I face is that I want to one-hot encode each sentence into a numpy array so that I can feed it into LDA. If I wanted to one-hot encode the full text, I could do it easily with these two lines.

sent_text = nltk.sent_tokenize(text)
hot_encode = pd.Series(sent_text).str.get_dummies(' ')
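For reference, `get_dummies(' ')` splits each sentence on spaces and builds one indicator column per unique word. A minimal, self-contained sketch (the two short sentences below are made-up stand-ins for the `sent_tokenize` output, so no nltk data is needed):

```python
import pandas as pd

# Hypothetical sentences standing in for nltk.sent_tokenize(text).
sentences = ["the scan is normal", "the scan shows thickening"]

# One row per sentence, one column per unique word (1 = word present).
hot_encode = pd.Series(sentences).str.get_dummies(' ')
print(hot_encode.shape)  # (2 sentences, 6 unique words)
```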

However, my goal is to one-hot encode each sentence into a numpy array. So I tried the code below.

from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
import nltk
import pandas as pd

from nltk.tokenize import TweetTokenizer, sent_tokenize

with open('radiologicalreport.txt', 'r') as myfile:
    report = myfile.read().replace('\n', '')

tokenizer_words = TweetTokenizer()
tokens_sentences = [tokenizer_words.tokenize(t)
                    for t in nltk.sent_tokenize(report)]
tokens_np = array(tokens_sentences)

label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(tokens_np)

# binary encode
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)

I get the error "TypeError: unhashable type: 'list'" on this line:

integer_encoded = label_encoder.fit_transform(tokens_np)

So I cannot proceed any further. Also, my tokens_sentences looks like what is shown in the image.

Please help!!

You are trying to use fit_transform to convert labels into numeric values (in your example, the labels are lists of words -- tokens_sentences).

But non-numeric labels can only be converted if they are hashable and comparable (see the docs). Lists are not hashable, but you can convert them to tuples:

tokens_np = array([tuple(s) for s in tokens_sentences]) 
# also ok: tokens_np = [tuple(s) for s in tokens_sentences]
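As a quick standalone check of why this works (the four tokens below are just an illustrative sample, not your full data): a list of tokens cannot be hashed, while a tuple of the same tokens can, which is exactly what LabelEncoder needs from its labels.

```python
# The same tokens as a list vs. as a tuple.
tokens_list = ['MDCT', 'OF', 'THE', 'CHEST']
tokens_tuple = tuple(tokens_list)

def is_hashable(obj):
    """Return True if obj can be hashed (e.g. used as a dict key)."""
    try:
        hash(obj)
        return True
    except TypeError:
        return False

print(is_hashable(tokens_list), is_hashable(tokens_tuple))  # False True
```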

Then you can encode your sentences into integer_encoded:

label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(tokens_np)
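Once integer_encoded holds one integer label per sentence, the final one-hot matrix can also be built directly with numpy, sidestepping the OneHotEncoder reshape step entirely. A sketch with made-up codes (the array below stands in for the LabelEncoder output on four sentences drawn from three distinct labels):

```python
import numpy as np

# Hypothetical LabelEncoder output: 4 sentences, 3 distinct labels (0..2).
integer_encoded = np.array([0, 2, 1, 0])

# Row i of the identity matrix is the one-hot vector for label i,
# so indexing np.eye by the codes yields one one-hot row per sentence.
onehot_encoded = np.eye(integer_encoded.max() + 1)[integer_encoded]
print(onehot_encoded.shape)  # (4 sentences, 3 labels)
```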