Twitter Sentiment Analysis – Natural Language Processing With Python and NLTK p.20
Finally, the moment we’ve all been waiting for and building up to. A live test!
We’ve decided to apply this classifier to the live Twitter stream, using Twitter’s API.
We’ve already covered how to do live Twitter API streaming; if you missed it, you can catch up here: http://pythonprogramming.net/twitter-api-streaming-tweets-python-tutorial/
After this, we output the findings to a text file, which we intend to graph!
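For anyone following along in text, the overall flow looks roughly like the sketch below. It assumes the pre-4.0 tweepy StreamListener API and the sentiment_mod module built a few tutorials back (with sentiment(text) returning a (label, confidence) pair); the keys, the 80% confidence cut-off, and the "happy" search term are placeholders to fill in yourself.

    from tweepy import Stream, OAuthHandler
    from tweepy.streaming import StreamListener  # tweepy < 4.0
    import json
    import sentiment_mod as s  # module pickled earlier in the series

    ckey = "YOUR_CONSUMER_KEY"
    csecret = "YOUR_CONSUMER_SECRET"
    atoken = "YOUR_ACCESS_TOKEN"
    asecret = "YOUR_ACCESS_SECRET"

    class listener(StreamListener):
        def on_data(self, data):
            all_data = json.loads(data)
            tweet = all_data["text"]
            sentiment_value, confidence = s.sentiment(tweet)
            print(tweet, sentiment_value, confidence)
            # only log classifications the ensemble is fairly sure about
            if confidence * 100 >= 80:
                with open("twitter-out.txt", "a") as output:
                    output.write(sentiment_value + "\n")
            return True

        def on_error(self, status):
            print(status)

    auth = OAuthHandler(ckey, csecret)
    auth.set_access_token(atoken, asecret)
    twitterStream = Stream(auth, listener())
    twitterStream.filter(track=["happy"])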
Playlist link: https://www.youtube.com/watch?v=FLZvOKSCkxY&list=PLQVvvaa0QuDf2JswnfiGkliBInZnIC4HL&index=1
sample code: http://pythonprogramming.net
http://hkinsley.com
https://twitter.com/sentdex
http://sentdex.com
http://seaofbtc.com
Twitter requires you to complete an application to get access to their API. Right now I have to show the project to my teacher and there is a deadline. I have been waiting for at least a week now, but my application for API access is still pending. Can someone lend me their consumer key and access token for a few days? You can regenerate them afterwards. Help would be appreciated. Thank you!
that "return true except: return true" was a real gamer moment!
I am getting a 401 error. I have already copied the consumer key, consumer secret, access token, and all that stuff, but I'm still getting the error. Any suggestions?
Thanks for the series… I just started NLP. After this series, any recommendations on the way forward?
Traceback (most recent call last):
  File "checkphp.py", line 110, in <module>
    MNB_classifier.train(training_set)
  File "C:\Users\Muhammad Usman Akram\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nltk\classify\scikitlearn.py", line 117, in train
    X = self._vectorizer.fit_transform(X)
  File "C:\Users\Muhammad Usman Akram\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\feature_extraction\dict_vectorizer.py", line 231, in fit_transform
    return self._transform(X, fitting=True)
  File "C:\Users\Muhammad Usman Akram\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\feature_extraction\dict_vectorizer.py", line 173, in _transform
    values.append(dtype(v))
MemoryError
Can someone help me please? How can I remove this error?
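One thing that stands out in that traceback is the Python36-32 in the path: a 32-bit Python build caps the process at roughly 2 GB, and vectorizing thousands of 5000-entry feature dicts can blow past that. Besides switching to 64-bit Python, shrinking the feature dictionaries usually helps; below is a sketch using the names from the earlier parts of the series, with movie_reviews standing in for whatever corpus you trained on and the 3000 cap being a guess to tune downward.

    import nltk
    from nltk.corpus import movie_reviews

    # Rebuild the vocabulary as in the earlier parts, but cap it harder:
    # fewer word_features means a much smaller feature dict per document.
    all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
    word_features = [w for w, _ in all_words.most_common(3000)]  # series used 5000

    def find_features(document_words):
        words = set(document_words)
        return {w: (w in words) for w in word_features}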
I'm getting a 401 (unauthorized) with your code @6:40, even though I validated that all keys and secrets are correct. Do you have any idea what might cause this?
Hello from 2019!
For all those looking for sentiment_mod, go back a few tutorials, sentdex shows you how to make it and put it in your working directory. Same goes for all the .pickle files.
If you have installed tweepy and your IDE/Atom/VS Code is not recognising it, you may have multiple Python environments. Make sure you are using the one you installed tweepy into.
Also, you have to change the line:
tweet = all_data["text"]
to:
tweet = ascii(all_data["text"])
otherwise you get a UnicodeEncodeError
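In context, that change sits inside on_data; a minimal sketch (pre-4.0 tweepy, printing only):

    import json
    from tweepy.streaming import StreamListener  # tweepy < 4.0

    class listener(StreamListener):
        def on_data(self, data):
            all_data = json.loads(data)
            if "text" in all_data:  # limit/delete notices carry no "text" field
                # ascii() escapes emoji and other non-ASCII characters, so printing
                # to a cp1252 Windows console no longer raises UnicodeEncodeError
                print(ascii(all_data["text"]))
            return True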
Where can I get sentiment_mod? 🙁 Thanks
Instead of using the sentiment_mod module, which doesn't seem to exist anymore, try VADER from NLTK.
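A minimal VADER sketch for anyone going that route (the compound-score thresholds below are the commonly used convention, not something from this series):

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time lexicon download

    sia = SentimentIntensityAnalyzer()
    scores = sia.polarity_scores("I love this tutorial, streaming finally makes sense!")
    print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

    # Common convention: compound >= 0.05 is positive, <= -0.05 negative, else neutral
    if scores["compound"] >= 0.05:
        label = "pos"
    elif scores["compound"] <= -0.05:
        label = "neg"
    else:
        label = "neutral"
    print(label)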
It's printing just 401. Does anybody know what the problem is and how I can resolve it?
For those of you working on Python 3.7, tweepy might give an error like: def _start(self, async): invalid syntax (async became a reserved keyword in 3.7). The solution is, instead of pip install tweepy, to install the latest version from Git using the following command:
pip install git+https://github.com/tweepy/tweepy
Great Teacher…..legend
I need a flowchart of this NLTK process with the classifier.
Can we do the same with YouTube comments????? I only have 9 days to finish a project, help! PS: I love your videos sentdex, keep it up.
Need help: even after adding "tweet = ascii(all_data["text"])" or putting the code into a try/except block, I'm still getting
–> No active exception to reraise… How do I fix it? I'm running this script in Anaconda.
Can anyone help with this? I'm getting "No module named 'tweepy'" even though I have installed tweepy.
Truly awesome tutorial, man!! But what if we want more than 3 classes (pos, neg, neutral), like anger, happy, sad, etc.? Do you know any dataset or library of this type, so that I can train it to make a more beautiful graph?
Please post a video on word sense disambiguation using the Lesk algorithm in NLTK to classify web page data.
How do I collect nouns, verbs, and prepositions from the DOM of web pages?
How do I scrape data from 100 web pages at a time in Python?
@sentdex the classifiers are really negatively biased, even with positive tweets. How do I make them more accurate?
Hello,
is anyone getting a similar error:
statistics.StatisticsError: no unique mode; found 3 equally common values
It still saves a twitter-out.txt and all, but there's still a bug.. ^^
#help please!
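That error comes from statistics.mode inside the VoteClassifier from earlier in the series: when the classifiers' votes are tied, there is no unique mode. One workaround is collections.Counter, which simply picks one of the tied labels (using an odd number of classifiers also makes exact ties rarer); a sketch:

    from collections import Counter
    from nltk.classify import ClassifierI

    class VoteClassifier(ClassifierI):
        """The ensemble from earlier in the series, with statistics.mode swapped
        for Counter so tied votes no longer raise StatisticsError."""

        def __init__(self, *classifiers):
            self._classifiers = classifiers

        def classify(self, features):
            votes = [c.classify(features) for c in self._classifiers]
            return Counter(votes).most_common(1)[0][0]

        def confidence(self, features):
            votes = [c.classify(features) for c in self._classifiers]
            winner_count = Counter(votes).most_common(1)[0][1]
            return winner_count / len(votes)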
Hi sentdex… I am trying to use NLP for Twitter sentiment analysis. As you know, my dataset would be from Twitter, but what features should I select and what algorithm should I use for better accuracy?
Hi, I was trying to execute the code, but as output I'm getting some random integer numbers instead of tweets, not a single text. I also tried different searches and I'm still getting the same. Need help.
Does anyone have the featuresets.pickle file? I am stuck on this and could not move further, so please send featuresets.pickle to my mail ID: maran.b8@gmail.com
Hey, can you do a tutorial on extracting hyponyms and hypernyms using WordNet and NLTK? Would really appreciate it.
We did a video on a sentiment analysis of 'global warming' vs. 'climate change': https://youtu.be/_o0w2pev8Hg
"Error Reason" : Process of conversion of string into JSON takes time which make twitter to automatically disconnect your app.
The error can be solved by using the following code snippet:

    import json
    import _thread
    from tweepy.streaming import StreamListener  # tweepy < 4.0

    def tweet2json(data):
        # parse the raw stream payload and print the tweet text
        tweet_data = json.loads(data)
        print(tweet_data["text"])

    class listener(StreamListener):
        def on_data(self, data):
            try:
                # start_new_thread wants the function plus an argument tuple;
                # parsing in a worker thread keeps on_data returning quickly
                _thread.start_new_thread(tweet2json, (data,))
            except Exception:
                print(" ")
            return True
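The idea behind the worker thread is that on_data returns right away, so the consumer never falls far enough behind for Twitter to drop the connection; if tweet order matters, a tidier variant of the same idea is to push the raw payloads onto a queue.Queue drained by a single worker.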