In my experience, the choice of tools for NLP mostly depends on the concrete task. For example, for named entity recognition (NER) there's a nice Java library called GATE [1]. It allows you to annotate your text with special marks (e.g. part-of-speech tags, "time", "name", etc.) and write regex-like rules to capture even very complicated patterns. On the other hand, the Stanford NLP Parser [2] offers the unique ability to extract sentence structure, a feature not available in any other library I know of. And in the Python world there's NLTK, NumPy, scikit-learn, easy integration with TreeTagger [3] and a super cool ecosystem for statistical text analysis. Each of these tools, and each combination of them, has its pros and cons, so the final choice really depends on your specific needs and personal preferences.
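To give a flavour of what "regex-like rules" for annotation look like, here's a toy Python sketch in the same spirit (this is not GATE's actual JAPE syntax; the rule names and patterns are made up for illustration):

```python
import re

# Toy GATE-style annotation rules: label "time" and "name" spans in raw
# text with plain regexes. Real systems match over token/POS annotations,
# not raw characters, but the idea is the same.
RULES = {
    "time": re.compile(r"\b\d{1,2}:\d{2}\b"),
    "name": re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),
}

def annotate(text):
    # Collect (start, end, label, surface form) spans for every rule match.
    spans = []
    for label, pattern in RULES.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label, m.group()))
    return sorted(spans)

print(annotate("Alice Smith called at 10:12 about the report."))
```

The output is a list of labeled spans, which is essentially what annotation frameworks hand to downstream rules.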

As for Spark (and distributed computation in general), most NLP tasks can be performed locally on the workers (e.g. you don't need a 1 TB dataset to find the part-of-speech tags for a particular sentence - you need only that specific sentence and maybe a little context). Some tasks, however, do require the entire dataset at once. The most popular of these, such as k-means clustering and collaborative filtering, are already implemented in MLlib. But it's always worth checking for the specific algorithms you may need before making a final decision.
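A minimal sketch of what "locally on the workers" means in practice, with a stub tagger standing in for a real one (e.g. NLTK's) - the function and file names here are placeholders, not a tested PySpark recipe:

```python
def tag_sentence(sentence):
    # Stand-in for a real POS tagger: mark capitalized tokens as proper
    # nouns, everything else as plain words. Each call needs only this
    # one sentence, never the whole dataset.
    return [(tok, "NNP" if tok[:1].isupper() else "WORD")
            for tok in sentence.split()]

def tag_partition(sentences):
    # In PySpark this would run once per partition on a worker, e.g.:
    #   tagged = sc.textFile("corpus.txt").mapPartitions(tag_partition)
    # Heavy models should be loaded here, once per partition, rather
    # than serialized into the closure from the driver.
    for s in sentences:
        yield tag_sentence(s)

# Plain-Python run of the same per-partition logic:
print(list(tag_partition(["Spark processes text", "NLTK tags Words"])))
```

Loading the tagger inside the partition function also sidesteps the non-serializable-library problem mentioned later in this thread.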

Let me know if you need advice on specific NLP or ML tasks.


Best Regards,

On Wed, Mar 12, 2014 at 10:12 PM, Brian O'Neill <> wrote:

Please let us know how you make out.  We have NLP  requirements on the horizon.  I’ve used NLTK before, but never on Spark.  I’d love to hear if that works out for you.



Brian O'Neill

Chief Technology Officer

Health Market Science

The Science of Better Results

2700 Horizon Drive  King of Prussia, PA  19406

M: 215.588.6024 @boneill42

The information transmitted in this email message is for the intended recipient only and may contain confidential and/or privileged material. If you received this email in error and are not the intended recipient, or the person responsible for delivering it to the intended recipient, please contact the sender at the email above and delete this email and any attachments and destroy any copies thereof. Any review, retransmission, dissemination, copying or other use of, or taking any action in reliance upon, this information by persons or entities other than the intended recipient is strictly prohibited.


From: Mayur Rustagi <>
Reply-To: <>
Date: Wednesday, March 12, 2014 at 2:38 PM
To: <>
Cc: "" <>
Subject: Re: NLP with Spark

Would love to know if somebody has tried this. The only possible problem I can foresee is non-serializable libraries; otherwise there's no reason it shouldn't work.

On Wed, Mar 12, 2014 at 11:10 AM, shankark <> wrote:
(apologies if this was sent out multiple times before)

We are about to start a large-scale text-processing research project and are debating between two alternatives for our cluster -- Spark and Hadoop. I've researched the possibilities of using NLTK with Hadoop and see that there's some precedent ( I wanted to know how easy it might be to use NLTK with PySpark, or whether ScalaNLP is mature enough to be used with the Scala API for Spark/MLlib.


Sent from the Apache Spark User List mailing list archive at