How to run Stanford CoreNLP for lemmatization on Google Colab?

There is a similar question, but Google Colab has changed a lot since then, and I would like to know how to use Stanford CoreNLP on Google Colab, specifically for lemmatization.

Expected answer:

Using the code:

!pip install stanfordnlp
import stanfordnlp
stanfordnlp.download("es")  # downloads the Spanish models...
nlp = stanfordnlp.Pipeline(processors='tokenize,mwt,pos,lemma')  # ...but Pipeline() defaults to English (en_ewt)
doc = nlp("Barack Obama was born in Hawaii.")
print(*[f'word: {word.text+" "}\tlemma: {word.lemma}' for sent in doc.sentences for word in sent.words], sep='\n')

%tb

------------
Loading: tokenize
With settings: 
{'model_path': '/root/stanfordnlp_resources/en_ewt_models/en_ewt_tokenizer.pt', 'lang': 'en', 'shorthand': 'en_ewt', 'mode': 'predict'}
Cannot load model from /root/stanfordnlp_resources/en_ewt_models/en_ewt_tokenizer.pt
An exception has occurred, use %tb to see the full traceback.

SystemExit: 1

/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2890: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
  warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)

Any suggestions for improving the question are welcome.

Perhaps it is better to use the new StanfordNLP instead of the old CoreNLP:

!pip install stanfordnlp
import stanfordnlp
stanfordnlp.download("en")
nlp = stanfordnlp.Pipeline(processors='tokenize,mwt,pos,lemma')
doc = nlp("Barack Obama was born in Hawaii.")
print(*[f'word: {word.text+" "}\tlemma: {word.lemma}' for sent in doc.sentences for word in sent.words], sep='\n')

You will get this output:

word: Barack    lemma: Barack
word: Obama     lemma: Obama
word: was   lemma: be
word: born  lemma: bear
word: in    lemma: in
word: Hawaii    lemma: Hawaii
word: .     lemma: .
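If you need the lemmas as data rather than printed text, you can flatten the document into (word, lemma) pairs. A minimal sketch of the traversal — the lightweight `Word`/`Sentence` classes below are stand-ins that mimic the attributes of stanfordnlp's real `doc.sentences` objects, so the snippet runs without downloading any models:

```python
from typing import List, Tuple

def collect_lemmas(sentences) -> List[Tuple[str, str]]:
    """Flatten a stanfordnlp-style document into (word, lemma) pairs."""
    return [(word.text, word.lemma)
            for sent in sentences
            for word in sent.words]

# Stand-ins mimicking stanfordnlp's Word/Sentence attributes (illustration only;
# with the real pipeline you would pass doc.sentences directly).
class Word:
    def __init__(self, text, lemma):
        self.text, self.lemma = text, lemma

class Sentence:
    def __init__(self, words):
        self.words = words

doc_sentences = [Sentence([Word("was", "be"), Word("born", "bear")])]
print(collect_lemmas(doc_sentences))  # [('was', 'be'), ('born', 'bear')]
```

With the real pipeline, `collect_lemmas(doc.sentences)` would give you the same pairs shown in the output above.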

Here is an example notebook.