Building an NLP App with Streamlit, spaCy and Python (NER, Sentiment Analyzer, Summarizer)


In this tutorial we will build a Natural Language Processing app with Streamlit, spaCy and Python for named entity recognition, sentiment analysis and text summarization.

# Installation
pip install streamlit

The tutorial also uses spaCy and TextBlob, so you will likely need these as well (the spaCy 2.x 'en' shortcut matches the spacy.load('en') used in the video):
pip install spacy textblob
python -m spacy download en
Official Docs:

Check out the Free Course on- Learn Julia Fundamentals

Udemy Course: Awesome Tools For NLP

Written Tutorial:
Code on Github:

== Great Python Books For Mastering Data Science & ML ==
Python Cookbook:
Python For Data Analysis :
Python Data Science HandBook:
Python Machine Learning by Sebastian Raschka:
Hands On Machine Learning with Scikit-Learn & TensorFlow:
Mastering ML with Scikitlearn:
Monetizing ML:
Building Machine Learning Systems With Python:

If you liked the video don’t forget to leave a like or subscribe.
If you need any help just message me in the comments, you never know it might help someone else too.
J-Secur1ty JCharisTech

==Get The Data Science Prime App==
@ Playstore :

==Need To Build A Data Science/ML App? Check out this gig==




Comment List

  • Lead Learner
    December 22, 2020

    Hi, thanks for the tutorial. I have trained a model for sentiment analysis. I was wondering if I could use that model within Streamlit?

  • Lead Learner
    December 22, 2020

    Amazing video. How does the st.cache above the entity_analyzer function improve performance?

  • Lead Learner
    December 22, 2020

    Good job, but when I followed the NER part, I got the error below. Do you know how to fix it? Thanks in advance.
    AttributeError: 'spacy.tokens.token.Token' object has no attribute 'label_'


    File "", line 322, in _run_script
    exec(code, module.__dict__)
    File "C:\Users\silen\OneDrive – 清華大學", line 56, in <module>
    File "C:\Users\silen\OneDrive – 清華大學", line 44, in main
    nlp_result = entity_analyzer(message)
    File "C:\Users\silen\OneDrive – 清華大學", line 20, in entity_analyzer
    entities = [(entity.text,entity.label_) for entity in docx]
    File "C:\Users\silen\OneDrive – 清華大學", line 20, in <listcomp>
    entities = [(entity.text,entity.label_) for entity in docx]
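The AttributeError comes from iterating over the Doc itself, which yields Token objects; `.label_` only exists on the entity Spans stored in `docx.ents`. A minimal sketch of the difference, using a blank pipeline and a hand-made entity so it runs without a downloaded model:

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
docx = nlp("Apple is looking at buying a U.K. startup")

# Iterating over docx yields Token objects, which have no .label_.
# Named entities are Span objects living on docx.ents, which do.
docx.ents = [Span(docx, 0, 1, label="ORG")]  # pretend NER tagged "Apple"

entities = [(entity.text, entity.label_) for entity in docx.ents]
print(entities)  # [('Apple', 'ORG')]
```

In the tutorial's entity_analyzer, changing `for entity in docx` to `for entity in docx.ents` should resolve the error.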

  • Lead Learner
    December 22, 2020

    Hey Charis, I am trying to import textblob but it gives me an error. I started using VS Code recently and it is not clear how I should make the dependencies available.

    import nltk — works fine
    import textblob — throws below error

    Traceback (most recent call last):
    File "", line 4, in <module>
    import textblob
    File "", line 2, in <module>
    from .blob import TextBlob, Word, Sentence, Blobber, WordList
    File "", line 35, in <module>
    from textblob.base import (BaseNPExtractor, BaseTagger, BaseTokenizer,
    File "", line 44, in <module>
    class BaseTokenizer(with_metaclass(ABCMeta), nltk.tokenize.api.TokenizerI):
    AttributeError: module 'nltk' has no attribute 'tokenize'

    Can someone please help
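"AttributeError: module 'nltk' has no attribute 'tokenize'" usually means Python imported something other than the installed NLTK package; a common cause is a local file named nltk.py in your project shadowing it. A quick stdlib check of where a module would actually be imported from:

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

# If this prints a path inside your own project rather than site-packages,
# rename that file and delete any stale .pyc files next to it.
print(module_origin("nltk"))
```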

  • Lead Learner
    December 22, 2020

    How can we use this in Jupyter Notebook or Colab??

  • Lead Learner
    December 22, 2020

    Can you please send me a supporting document of what you have done in the video? I see that you have attached some resources, but my question is: have you compiled a supporting document? If yes, please send it to me. If not, please send me the books' chapters and sections in which I can find and follow what you have done. I want to understand how you completed this task because I was struggling with it and still am. The code is working, yes, but working code alone does not take you beyond a novice.

  • Lead Learner
    December 22, 2020

    I created a new virtual environment and I was able to execute nlp = spacy.load('en').

    Started everything from scratch and then ran into this error (@14:17 in the video):
    ValueError: spacy.syntax.nn_parser.Parser size changed, may indicate binary incompatibility. Expected 72 from C header, got 64 from PyObject
    File "/opt/anaconda3/lib/python3.7/site-packages/streamlit/", line 311, in _run_script
    exec(code, module.__dict__)
    File "/Users/sauce_god/Documents/programs/python/NLPstreamlit/", line 62, in <module>
    File "/Users/sauce_god/Documents/programs/python/NLPstreamlit/", line 33, in main
    nlp_result = text_analyzer(message)
    File "/Users/sauce_god/Documents/programs/python/NLPstreamlit/", line 15, in text_analyzer
    nlp = spacy.load('en')
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 21, in load
    return util.load_model(name, **overrides)
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 112, in load_model
    return load_model_from_link(name, **overrides)
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 129, in load_model_from_link
    return cls.load(**overrides)
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/data/en/", line 12, in load
    return load_model_from_init_py(__file__, **overrides)
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 173, in load_model_from_init_py
    return load_model_from_path(data_path, meta, **overrides)
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 143, in load_model_from_path
    cls = get_lang_class(meta['lang'])
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 50, in get_lang_class
    module = importlib.import_module('.lang.%s' % lang, 'spacy')
    File "/opt/anaconda3/lib/python3.7/importlib/", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
    File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
    File "<frozen importlib._bootstrap>", line 983, in _find_and_load
    File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
    File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
    File "<frozen importlib._bootstrap_external>", line 728, in exec_module
    File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/lang/en/", line 15, in <module>
    from ...language import Language
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 18, in <module>
    from .pipeline import DependencyParser, Tensorizer, Tagger, EntityRecognizer
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/pipeline/", line 4, in <module>
    from .pipes import Tagger, DependencyParser, EntityRecognizer, EntityLinker
    File "pipes.pyx", line 1, in init spacy.pipeline.pipes
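The "binary incompatibility. Expected 72 from C header, got 64 from PyObject" error typically means spaCy's compiled extensions were built against different versions of its dependencies (thinc, numpy) than the ones now installed, e.g. after a partial upgrade. A plausible fix, assuming a pip-managed environment, is reinstalling a consistent set:

```shell
pip uninstall -y spacy thinc
pip install -U spacy
# spaCy 2.x (as in the video) used the 'en' shortcut; spaCy 3.x uses the full name:
python -m spacy download en_core_web_sm
```

Then restart the Streamlit server so the fresh binaries are picked up.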

  • Lead Learner
    December 22, 2020

    OSError: [E050] Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
    File "/opt/anaconda3/lib/python3.7/site-packages/streamlit/", line 311, in _run_script
    exec(code, module.__dict__)
    File "/Users/sauce_god/Documents/Programs/Streamlit/nlp-app/", line 114, in <module>
    File "/Users/sauce_god/Documents/Programs/Streamlit/nlp-app/", line 59, in main
    nlp_result = text_analyzer(message)
    File "/opt/anaconda3/lib/python3.7/site-packages/streamlit/", line 564, in wrapped_func
    return get_or_set_cache()
    File "/opt/anaconda3/lib/python3.7/site-packages/streamlit/", line 544, in get_or_set_cache
    return_value = func(*args, **kwargs)
    File "/Users/sauce_god/Documents/Programs/Streamlit/nlp-app/", line 29, in text_analyzer
    nlp = spacy.load('en')
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 30, in load
    return cli_info(model, markdown, silent)
    File "/opt/anaconda3/lib/python3.7/site-packages/spacy/", line 169, in load_model
    data_dir = '%s_%s-%s' % (meta['lang'], meta['name'], meta['version'])
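E050 means no model named 'en' is installed in the environment that is running the app. The fix is downloading the model into that same environment: `python -m spacy download en` on the spaCy 2.x used in the video, or `python -m spacy download en_core_web_sm` on spaCy 3.x (where you then load it by that full name). A small sketch that fails gracefully instead of crashing the app:

```python
import spacy

def try_load(name="en_core_web_sm"):
    """Return the loaded pipeline, or None if the model isn't installed (E050)."""
    try:
        return spacy.load(name)
    except OSError:
        # Fix: run `python -m spacy download en_core_web_sm` (spaCy 3.x),
        # or `python -m spacy download en` on spaCy 2.x, in the same
        # environment that runs `streamlit run`.
        return None

nlp = try_load()
```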

  • Lead Learner
    December 22, 2020

    At 17:28 my lemmatizer does not "lemmatize" 'coding'. Very interesting.

  • Lead Learner
    December 22, 2020

    Awesome. Thanks.
    Can I open two different Streamlit instances at the same time on the same PC? For example:
    $ streamlit run -> http://localhost:8501
    $ streamlit run -> http://localhost:8502 (for example)
    On different ports in the web browser.
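Yes. Each `streamlit run` accepts a --server.port flag (8501 is the default), so two apps can run side by side on different ports (app_one.py and app_two.py are placeholder names):

```shell
streamlit run app_one.py --server.port 8501
streamlit run app_two.py --server.port 8502
```

In many versions Streamlit will also pick the next free port automatically if you simply start a second instance without the flag.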

  • Lead Learner
    December 22, 2020

    Thanks a lot J. Are there additional parts of Streamlit which could be interesting that you have not shown yet? Hope to see more from you soon. Great work!

  • Lead Learner
    December 22, 2020

    Great work! Have you built one for topic modelling? Also, how can I upload a file for the same analysis?

  • Lead Learner
    December 22, 2020

    How do I apply a CSV file for sentiment analysis? The .csv file has a number of reviews.

  • Lead Learner
    December 22, 2020

    We give a single review for sentiment analysis.
    Please make a video on how to upload a bulk .CSV file of reviews and analyse how many are +ve or -ve in the file.

Write a comment