Unable to tokenize multiple columns in a dataframe
I have a table that contains both numeric and string data, but in different columns. The table holds responses to a web form and contains empty cells. I want to run text processing on the string columns. I can't drop the rows with empty cells, so for the empty string cells I replaced the NaN values with the letter 'a'.
Sample data:
colmun_name1   column_name2     column_name3  column_name4  classify
This is a cat  This is a dog    1             2             0
This is a rat  This is a mouse  45            32            1
a              Good mouse       0             0             0
I used the following code to make sure that all of the data in the string columns really is string data.
df2=df[[column_name1, column_name2]]
for i in range(0,len(df2)):
cell=df2.iloc[i]
cell=str(str)
df2.iloc[i]=cell
Then, when I tokenize, I get this error:
<ipython-input-64-24a99733ba19> in <module>
1 from nltk.tokenize import word_tokenize
----> 2 tokenized_word=word_tokenize(df2)
3 print(tokenized_word)
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/__init__.py in word_tokenize(text, language, preserve_line)
126 :type preserver_line: bool
127 """
--> 128 sentences = [text] if preserve_line else sent_tokenize(text, language)
129 return [token for sent in sentences
130 for token in _treebank_word_tokenizer.tokenize(sent)]
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/__init__.py in sent_tokenize(text, language)
93 """
94 tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
---> 95 return tokenizer.tokenize(text)
96
97 # Standard word tokenizer.
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in tokenize(self, text, realign_boundaries)
1239 Given a text, returns a list of the sentences in that text.
1240 """
-> 1241 return list(self.sentences_from_text(text, realign_boundaries))
1242
1243 def debug_decisions(self, text):
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in sentences_from_text(self, text, realign_boundaries)
1289 follows the period.
1290 """
-> 1291 return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
1292
1293 def _slices_from_text(self, text):
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in <listcomp>(.0)
1289 follows the period.
1290 """
-> 1291 return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
1292
1293 def _slices_from_text(self, text):
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in span_tokenize(self, text, realign_boundaries)
1279 if realign_boundaries:
1280 slices = self._realign_boundaries(text, slices)
-> 1281 for sl in slices:
1282 yield (sl.start, sl.stop)
1283
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in _realign_boundaries(self, text, slices)
1320 """
1321 realign = 0
-> 1322 for sl1, sl2 in _pair_iter(slices):
1323 sl1 = slice(sl1.start + realign, sl1.stop)
1324 if not sl2:
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in _pair_iter(it)
311 """
312 it = iter(it)
--> 313 prev = next(it)
314 for el in it:
315 yield (prev, el)
/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in _slices_from_text(self, text)
1293 def _slices_from_text(self, text):
1294 last_break = 0
-> 1295 for match in self._lang_vars.period_context_re().finditer(text):
1296 context = match.group() + match.group('after_tok')
1297 if self.text_contains_sentbreak(context):
TypeError: expected string or bytes-like object
I tried changing it to
df2=df[column_name1][column_name2]
but I get the same error.
What should I do?
I think your error is a simple one: replace cell=str(str) with cell=str(cell).
Besides that, you need proper indentation, and you can't call str on the whole row in one go, only on the individual cells. So your code should look like this minimal example:
import pandas as pd

# Build a small example frame; column 'a' ends with a missing value (None).
data_dict = {'a': [l for l in 'aakjnasnkdf'] + [None],
             'b': [l for l in 'aakjnasnkdf'] + [1],
             'c': range(12)}
df = pd.DataFrame(data_dict)

column_name1 = 'a'
column_name2 = 'b'

df2 = df.loc[:, [column_name1, column_name2]]
for i in range(0, len(df2)):
    # Convert each cell of the row to a string individually.
    cell1, cell2 = df2.iloc[i]
    cell1 = str(cell1)
    cell2 = str(cell2)
    df2.iloc[i] = [cell1, cell2]
TL;DR
# Creates a `colmun_name1_tokenized` column by
# taking the `colmun_name1` column and
# applying the word_tokenize function on every cell in the column.
>>> df['colmun_name1_tokenized'] = df['colmun_name1'].apply(word_tokenize)
>>> df.head()
    colmun_name1     column_name2  column_name3  column_name4  classify  \
0  This is a cat    This is a dog             1             2         0
1  This is a rat  This is a mouse            45            32         1
2              a       Good mouse             0             0         0

  colmun_name1_tokenized
0     [This, is, a, cat]
1     [This, is, a, rat]
2                    [a]
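The reason the original call fails is that word_tokenize expects a single string, while df2 is a whole DataFrame, so NLTK's punkt tokenizer raises TypeError: expected string or bytes-like object. .apply hands each cell to the tokenizer one at a time instead. A minimal sketch of the contrast, using the df built below:
# word_tokenize(df['colmun_name1'][0])     # fine: a single string
# word_tokenize(df['colmun_name1'])        # TypeError: a Series is not a string
df['colmun_name1'].apply(word_tokenize)    # tokenizes each cell separately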
If you need to tokenize more than one column, and you want to overwrite the columns with the tokenized output:
>>> with StringIO(file_str) as fin:
... df = pd.read_csv(fin, sep='\t')
...
>>> for col_name in ['colmun_name1', 'column_name2']:
... df[col_name] = df[col_name].apply(word_tokenize)
...
>>> df.head()
         colmun_name1          column_name2  column_name3  column_name4  \
0  [This, is, a, cat]    [This, is, a, dog]             1             2
1  [This, is, a, rat]  [This, is, a, mouse]            45            32
2                 [a]         [Good, mouse]             0             0

   classify
0         0
1         1
2         0
Just the code:
from io import StringIO
import pandas as pd
from nltk import word_tokenize
file_str = """colmun_name1\tcolumn_name2\tcolumn_name3\tcolumn_name4\tclassify
This is a cat\tThis is a dog\t1\t2\t0
This is a rat\tThis is a mouse\t45\t32\t1
a\tGood mouse\t0\t0\t0 """
with StringIO(file_str) as fin:
    df = pd.read_csv(fin, sep='\t')

for col_name in ['colmun_name1', 'column_name2']:
    df[col_name] = df[col_name].apply(word_tokenize)
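Since the question replaces NaN with the letter 'a' before tokenizing, a variant of the same loop that does that cleaning first might look like this (a sketch only; fillna/astype are standard pandas calls, and the 'a' placeholder just mirrors the question):
for col_name in ['colmun_name1', 'column_name2']:
    # Replace missing cells with 'a' (as in the question), force everything
    # to str, then tokenize each cell.
    df[col_name] = df[col_name].fillna('a').astype(str).apply(word_tokenize)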