
Using requests to connect
First, we import the necessary libraries. We add the json library to parse the output of the Twitter API easily, and the quote function from urllib.parse, which encodes a query string so that it can be embedded in a request URL:
import requests
from requests_oauthlib import OAuth1
import json
from urllib.parse import quote
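As a quick illustration (this snippet is not part of the workflow itself), quote simply percent-encodes spaces and other special characters so the query can safely be placed in a URL:
from urllib.parse import quote
print(quote('BMW OR Mercedes OR Audi'))  # prints BMW%20OR%20Mercedes%20OR%20Audi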
Next, we define the parameters that will be used to establish connections with the Twitter API, and we create an OAuth client:
params = {
    'app_key': 'YOUR_APP_KEY',
    'app_secret': 'YOUR_APP_SECRET',
    'oauth_token': 'USER_OAUTH_TOKEN',
    'oauth_token_secret': 'USER_OAUTH_TOKEN_SECRET'
}

auth = OAuth1(
    params['app_key'],
    params['app_secret'],
    params['oauth_token'],
    params['oauth_token_secret']
)
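As an optional sanity check (not part of the original workflow), we can verify that the credentials are accepted before issuing any requests. This is a minimal sketch assuming the REST API v1.1 account/verify_credentials.json endpoint:
# Ask the API which account we are authenticated as
verify_url = 'https://api.twitter.com/1.1/account/verify_credentials.json'
response = requests.get(verify_url, auth=auth)
if response.status_code == 200:
    print('Authenticated as', response.json()['screen_name'])
else:
    print('Authentication failed with status', response.status_code)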
Next, we encode our query. We have chosen to search for three car brands: BMW, Mercedes, and Audi:
q = quote('BMW OR Mercedes OR Audi')
Then we execute a search request against the REST API using our encoded query and the OAuth client:
url_rest = 'https://api.twitter.com/1.1/search/tweets.json?q=' + q  # REST API v1.1 search endpoint
results = requests.get(url_rest, auth=auth)
The request returns a JSON object whose statuses field contains the list of matching tweets with all their metadata. We parse the response and print the content of each tweet, which is stored under the text field:
for tweet in results.json()['statuses']:
    print(tweet['text'])
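Each tweet object contains much more than the text. As an illustrative sketch, we could also print some of the standard metadata fields of the v1.1 tweet object, such as the author's screen name, the creation time, and the retweet count:
for tweet in results.json()['statuses']:
    # Print selected metadata alongside the tweet content
    print(tweet['user']['screen_name'],
          tweet['created_at'],
          tweet['retweet_count'],
          tweet['text'])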
Similarly, we make a request to the Streaming API to get the most recent tweets as they are published. The endpoint below is an assumption: we use the public sample stream (statuses/sample.json); the statuses/filter.json endpoint with a track parameter could be used instead to restrict the stream to our query:
url_streaming = 'https://stream.twitter.com/1.1/statuses/sample.json'
stream_results = requests.get(url_streaming, auth=auth, stream=True)  # the Streaming API also requires OAuth
We then iterate over the lines of the response as they are returned:
for line in stream_results.iter_lines():
    if line:
        decoded_line = line.decode('utf-8')
        print(json.loads(decoded_line)['text'])
If the line is not empty, we decode it as UTF-8 to avoid encoding issues, and then we print the text field of the parsed JSON.
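Since the stream does not terminate on its own, in practice it is useful to stop after collecting a fixed number of tweets and to skip messages that carry no text (such as delete notices). A minimal sketch under those assumptions (the limit of 100 tweets is arbitrary):
collected = []
for line in stream_results.iter_lines():
    if not line:
        continue  # skip the blank keep-alive lines sent by the Streaming API
    try:
        tweet = json.loads(line.decode('utf-8'))
    except ValueError:
        continue  # ignore lines that are not valid JSON
    if 'text' in tweet:
        collected.append(tweet['text'])
    if len(collected) >= 100:  # arbitrary stopping point
        break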
We will use both methods to get and pre-process data in a practical example in Chapter 4, Analyzing Twitter Using Sentiment Analysis and Entity Recognition.