I am doing a historical search of Twitter data using Twitter’s Sandbox API, via the TwitterAPI package for Python. The Sandbox tier allows a total of 50 requests per month.
I have the code below, which collects the data fine, but it has only made one request, so I only have 100 tweets. I’m wondering what code I can insert so that it makes multiple requests in one go. I am hoping to use all 50 of my requests for this month with this code.
Current code:
from TwitterAPI import TwitterAPI
import csv
SEARCH_TERM = 'my-search-term-here'
PRODUCT = 'fullarchive'
LABEL = 'here-goes-my-dev-env'
api = TwitterAPI("consumer_key",
                 "consumer_secret",
                 "access_token_key",
                 "access_token_secret")

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query': SEARCH_TERM,
                 'fromDate': '201811151334',
                 'toDate': '201811161500'})

csvFile = open('filename.csv', 'a')
csvWriter = csv.writer(csvFile)

for item in r:
    csvWriter.writerow([item['created_at'], item['user']['screen_name'], item['text']])
2 Answers
I’m not sure I fully understand your problem.
You are making a request that potentially matches thousands of tweets; right now you get the first 100, and you now want the next 100 as well, is that correct?
If so, you should know that the Twitter API is based on a paging system: if your request matches 300 tweets, you can access them as 3 pages of 100 tweets each.
To do so, rely on the paging API of TwitterAPI:
http://geduldig.github.io/TwitterAPI/paging.html
https://geduldig.github.io/TwitterAPI/twitterpager.html
Be aware that there is another library, ‘tweepy’, that can do exactly the same thing. I find it more convenient, but that is a personal preference.
As machinus says, you can use the TwitterPager utility. In your code, I think you would need to change just this line of code:
To this: