How do I install NLTK corpora/models programmatically, i.e. without the GUI downloader?

My project uses NLTK. How can I list the project's corpus and model requirements so that they can be installed automatically? I don't want to click through the nltk.download() GUI, installing packages one by one.

Also, is there any way to freeze that same list of requirements (like pip freeze)?

The NLTK website lists a command-line interface for downloading packages and collections at the bottom of its data page:

http://www.nltk.org/data

The command-line usage varies depending on which version of Python you are running, but on my Python 2.6 installation I noticed I was missing the 'spanish_grammars' model, and the following worked fine:

    python -m nltk.downloader spanish_grammars

You mentioned listing the project's corpus and model requirements; while I'm not sure how to do that automatically, I figured I would at least share this.

In addition to the command-line option already mentioned, you can programmatically install NLTK data from your Python script by passing an argument to the download() function.

See the help(nltk.download) text, specifically:

    Individual packages can be downloaded by calling the ``download()``
    function with a single argument, giving the package identifier for the
    package that should be downloaded:

        >>> download('treebank') # doctest: +SKIP
            [nltk_data] Downloading package 'treebank'...
            [nltk_data] Unzipping corpora/treebank.zip.

I can confirm that this works for downloading one package at a time, and also when passed a list or tuple:

    >>> import nltk
    >>> nltk.download('wordnet')
    [nltk_data] Downloading package 'wordnet' to
    [nltk_data]     C:\Users\_my-username_\AppData\Roaming\nltk_data...
    [nltk_data]   Unzipping corpora\wordnet.zip.
    True

You can also try to download a package that has already been downloaded, without any problems:

    >>> nltk.download('wordnet')
    [nltk_data] Downloading package 'wordnet' to
    [nltk_data]     C:\Users\_my-username_\AppData\Roaming\nltk_data...
    [nltk_data]   Package wordnet is already up-to-date!
    True

Additionally, the function appears to return a boolean value that you can use to check whether the download succeeded:

    >>> nltk.download('not-a-real-name')
    [nltk_data] Error loading not-a-real-name: Package 'not-a-real-name'
    [nltk_data]     not found in index
    False
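Combining the list argument with the boolean return value gives one answer to the original question: keep the project's data requirements in a single list and install them in one call. A minimal sketch, assuming a hypothetical REQUIRED_NLTK_PACKAGES list (my own name, not an NLTK convention); the downloader parameter exists only so the logic can be exercised without touching the network:

```python
# Example package identifiers a project might require.
REQUIRED_NLTK_PACKAGES = ["wordnet", "treebank", "conll2002"]

def install_nltk_requirements(packages, downloader=None):
    """Download each NLTK package; return the identifiers that failed."""
    if downloader is None:
        import nltk  # imported lazily so the function is testable without nltk
        downloader = nltk.download
    # nltk.download() returns False when a package cannot be fetched,
    # so we collect those identifiers for the caller to report.
    return [pkg for pkg in packages if not downloader(pkg)]
```

A project could call install_nltk_requirements(REQUIRED_NLTK_PACKAGES) from its setup step and raise if the returned list is non-empty.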

To install the popular NLTK corpora and models:

    python -m nltk.downloader popular

Or, on Linux, you can use:

    sudo python -m nltk.downloader -d /usr/local/share/nltk_data popular

You can also browse the corpora and models from the command line:

    mlee@server:/scratch/jjylee/tests$ sudo python -m nltk.downloader
    [sudo] password for jjylee:
    NLTK Downloader
    ---------------------------------------------------------------------------
        d) Download   l) List    u) Update   c) Config   h) Help   q) Quit
    ---------------------------------------------------------------------------
    Downloader> d

    Download which package (l=list; x=cancel)?
      Identifier> l
    Packages:
      [ ] averaged_perceptron_tagger_ru Averaged Perceptron Tagger (Russian)
      [ ] basque_grammars..... Grammars for Basque
      [ ] bllip_wsj_no_aux.... BLLIP Parser: WSJ Model
      [ ] book_grammars....... Grammars from NLTK Book
      [ ] cess_esp............ CESS-ESP Treebank
      [ ] chat80.............. Chat-80 Data Files
      [ ] city_database....... City Database
      [ ] cmudict............. The Carnegie Mellon Pronouncing Dictionary (0.6)
      [ ] comparative_sentences Comparative Sentence Dataset
      [ ] comtrans............ ComTrans Corpus Sample
      [ ] conll2000........... CONLL 2000 Chunking Corpus
      [ ] conll2002........... CONLL 2002 Named Entity Recognition Corpus
      [ ] conll2007........... Dependency Treebanks from CoNLL 2007 (Catalan
                               and Basque Subset)
      [ ] crubadan............ Crubadan Corpus
      [ ] dependency_treebank. Dependency Parsed Treebank
      [ ] europarl_raw........ Sample European Parliament Proceedings Parallel
                               Corpus
      [ ] floresta............ Portuguese Treebank
      [ ] framenet_v15........ FrameNet 1.5
    Hit Enter to continue:
      [ ] framenet_v17........ FrameNet 1.7
      [ ] gazetteers.......... Gazeteer Lists
      [ ] genesis............. Genesis Corpus
      [ ] gutenberg........... Project Gutenberg Selections
      [ ] hmm_treebank_pos_tagger Treebank Part of Speech Tagger (HMM)
      [ ] ieer................ NIST IE-ER DATA SAMPLE
      [ ] inaugural........... C-Span Inaugural Address Corpus
      [ ] indian.............. Indian Language POS-Tagged Corpus
      [ ] jeita............... JEITA Public Morphologically Tagged Corpus (in
                               ChaSen format)
      [ ] kimmo............... PC-KIMMO Data Files
      [ ] knbc................ KNB Corpus (Annotated blog corpus)
      [ ] large_grammars...... Large context-free and feature-based grammars
                               for parser comparison
      [ ] lin_thesaurus....... Lin's Dependency Thesaurus
      [ ] mac_morpho.......... MAC-MORPHO: Brazilian Portuguese news text with
                               part-of-speech tags
      [ ] machado............. Machado de Assis -- Obra Completa
      [ ] masc_tagged......... MASC Tagged Corpus
      [ ] maxent_ne_chunker... ACE Named Entity Chunker (Maximum entropy)
      [ ] moses_sample........ Moses Sample Models
    Hit Enter to continue: x

    Download which package (l=list; x=cancel)?
      Identifier> conll2002
        Downloading package conll2002 to /afs/mit.edu/u/m/mlee/nltk_data...
          Unzipping corpora/conll2002.zip.

    ---------------------------------------------------------------------------
        d) Download   l) List    u) Update   c) Config   h) Help   q) Quit
    ---------------------------------------------------------------------------
    Downloader>

I managed to install the corpora and models in a custom directory using the following code:

    import nltk
    nltk.download(info_or_id="popular", download_dir="/path/to/dir")
    nltk.data.path.append("/path/to/dir")

This installs the "popular" corpora/models in "/path/to/dir" and lets NLTK know where to look for them (via data.path.append).

You can't "freeze" the data in a requirements file, but you could add this code to your __init__, together with a check that the data already exists, so that it only downloads what is missing.
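One way to implement that "check that the data already exists" step is nltk.data.find(), which raises LookupError when a resource is not installed. A minimal sketch, assuming find()-style resource paths like "corpora/wordnet"; the function name ensure_nltk_data is my own, and the find/download parameters exist only so the logic can be tested offline:

```python
def ensure_nltk_data(resources, find=None, download=None):
    """Download only the NLTK resources that cannot already be located.

    resources maps find()-style paths (e.g. "corpora/wordnet") to
    download identifiers; returns the identifiers actually fetched.
    """
    if find is None or download is None:
        import nltk  # imported lazily so the function is testable without nltk
        find = find or nltk.data.find
        download = download or nltk.download
    fetched = []
    for path, identifier in resources.items():
        try:
            find(path)              # already installed: nothing to do
        except LookupError:
            download(identifier)    # fetch only the missing package
            fetched.append(identifier)
    return fetched
```

Calling this from your package's __init__ keeps repeated imports cheap, since already-present data is never re-downloaded.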