
I don’t know if it’s possible, but is there a way to get the date/time of each tweet that comes through Twitter’s Filtered Stream?

I’m using the sample code from Twitter’s API v2 documentation for "filtered stream" tweets as a base. I have edited it so that I can search for a keyword, and I am able to get the text of the tweets, but I also want the date/time of each tweet, and I can’t seem to manage it.

My goal is to count the number of tweets created every 15 minutes that contain my word(s) of interest, but I can’t do this without the time each tweet was created.

Here is my code so far:

import requests
import os
import json
import config
import preprocessor as p
from csv import writer


# To set your environment variables in your terminal run the following line:
# export 'BEARER_TOKEN'='<your_bearer_token>'
bearer_token = config.BEARER_TOKEN


def bearer_oauth(r):
    """
    Method required by bearer token authentication.
    """

    r.headers["Authorization"] = f"Bearer {bearer_token}"
    r.headers["User-Agent"] = "v2FilteredStreamPython"
    return r


def get_rules():
    response = requests.get(
        "https://api.twitter.com/2/tweets/search/stream/rules", auth=bearer_oauth
    )
    if response.status_code != 200:
        raise Exception(
            "Cannot get rules (HTTP {}): {}".format(response.status_code, response.text, response.created_at)
        )
    print(json.dumps(response.json()))
    return response.json()


def delete_all_rules(rules):
    if rules is None or "data" not in rules:
        return None

    ids = list(map(lambda rule: rule["id"], rules["data"]))
    payload = {"delete": {"ids": ids}}
    response = requests.post(
        "https://api.twitter.com/2/tweets/search/stream/rules",
        auth=bearer_oauth,
        json=payload
    )
    if response.status_code != 200:
        raise Exception(
            "Cannot delete rules (HTTP {}): {}".format(
                response.status_code, response.text
            )
        )
    print(json.dumps(response.json()))


def set_rules(delete):
    # You can adjust the rules if needed
    sample_rules = [
        {"value": "(AVAX OR #AVAX OR AVAX/USDT OR AVAXUSDT OR AVAXUSD OR AVALANCHEAVAX OR #AVALANCHEAVAX) lang:en -giveaway -jackpot -jackpots -collectable -collectible -collection"},#-passive -prize -prizes -giveaways -tag -YouTube -dickhead -rank -ranked -rewards -link -visit -game -promotion -promote -vote -colony -retweet -Regards -discord -jizz -tits -join -airdrop -earn -retweets -contest -shib -shiba -is:retweet -is:reply -has:links"},
     #   {"value": "cat has:images -grumpy", "tag": "cat pictures"},
    ]
    payload = {"add": sample_rules}
    response = requests.post(
        "https://api.twitter.com/2/tweets/search/stream/rules",
        auth=bearer_oauth,
        json=payload,
    )
    if response.status_code != 201:
        raise Exception(
            "Cannot add rules (HTTP {}): {}".format(response.status_code, response.text)
        )
    print(json.dumps(response.json()))


def get_stream(set):
    response = requests.get(
        "https://api.twitter.com/2/tweets/search/stream", auth=bearer_oauth, stream=True,
    )
    print(response.status_code)
    if response.status_code != 200:
        raise Exception(
            "Cannot get stream (HTTP {}): {}".format(
                response.status_code, response.text
            )
        )
    for response_line in response.iter_lines():
        if response_line:
            json_response = json.loads(response_line)
            # print(json.dumps(json_response, indent=4, sort_keys=True))
            tweet = json_response['data']['text']
            tweet = p.clean(tweet)
            print(tweet)
            tweetList = [tweet]
            
            with open('avaxdata.csv', 'a+', newline='') as write_obj:
                csv_writer = writer(write_obj)
                csv_writer.writerow(tweetList)

def main():
    rules = get_rules()
    delete = delete_all_rules(rules)
    set = set_rules(delete)
    get_stream(set)


if __name__ == "__main__":
    main()

2 Answers


  1. Chosen as BEST ANSWER

    Big shout out to Alan Lee for steering me in the right direction. I used the URL he provided in the get_stream function:

    def get_stream(set):
        response = requests.get(
            "https://api.twitter.com/2/tweets/search/stream?tweet.fields=created_at,text", auth=bearer_oauth, stream=True,
        )
    

    Notice I also added the 'text' parameter to the end because I wanted both the time the tweet was created, and what the actual tweet said.

    The full get_stream function, which gets the date and text of each tweet, cleans the text, and stores them in a CSV in two separate columns (text, date), is below:

    NOTE: Make sure to install 'tweet-preprocessor' and NOT 'preprocessor'. They are different packages, and the latter won't work with this code.

    import preprocessor as p
    from csv import writer
    
    def get_stream(set):
        response = requests.get(
            "https://api.twitter.com/2/tweets/search/stream?tweet.fields=created_at,text", auth=bearer_oauth, stream=True,
        )
        print(response.status_code)
        if response.status_code != 200:
            raise Exception(
                "Cannot get stream (HTTP {}): {}".format(
                    response.status_code, response.text
                )
            )
        for response_line in response.iter_lines():
            if response_line:
                json_response = json.loads(response_line)
                tweet_text = json_response['data']['text']
                tweet_created_at = json_response['data']['created_at']
                tweet_text = p.clean(tweet_text)
                
                print(tweet_text)
                print(tweet_created_at)
                
                tweetList = [tweet_text, tweet_created_at]
                
                with open('avaxdata.csv', 'a+', newline='') as write_obj:
                    csv_writer = writer(write_obj)
                    csv_writer.writerow(tweetList)
    

    I hope someone out there finds this useful! Took me a lot longer than it should have just to get this working haha!
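
    As a follow-up, for the original 15-minute counting goal: with created_at now stored in the CSV, the counting step could look roughly like the sketch below. This is just one way to do it, assuming the CSV has no header row, the columns are in the order written above (text, created_at), and pandas is installed.

    # Sketch of bucketing tweets into 15-minute windows from the CSV above.
    # Assumes avaxdata.csv has no header and two columns: text, created_at.
    import pandas as pd

    df = pd.read_csv("avaxdata.csv", names=["text", "created_at"])

    # created_at is an ISO 8601 timestamp, e.g. "2021-11-26T03:08:35.000Z"
    df["created_at"] = pd.to_datetime(df["created_at"])

    # Count tweets per 15-minute window
    counts = df.resample("15min", on="created_at").size()
    print(counts)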


  2. Yes, you can add additional field parameters to the endpoint. To get the created_at time for Tweets, try https://api.twitter.com/2/tweets/search/stream?tweet.fields=created_at.
    For a full list of optional parameters, check out the API reference here.
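
    If you would rather not hard-code the query string, the same field can also be passed through requests' params argument. A minimal sketch, assuming the bearer_oauth helper from the question's code:

    import requests

    # Same stream request, with the extra field supplied as a query parameter
    # instead of being appended to the URL by hand.
    response = requests.get(
        "https://api.twitter.com/2/tweets/search/stream",
        auth=bearer_oauth,  # helper from the question's code
        stream=True,
        params={"tweet.fields": "created_at"},
    )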
