
I wrote this script that returns a list of ads with their stats, but apparently I'm only getting insights for active ads and not for paused ones. For paused ones, I'm just getting the campaign name and its id!

I tried using filtering as below, but it's not working:

first = "https://graph.facebook.com/v3.2/act_105433210/campaigns?filtering=[{'field':'effective_status','operator':'IN','value':['PAUSED']}]&fields=created_time,name,effective_status,insights{spend,impressions,clicks}&access_token=%s"% token

Then I check using:

result = requests.get(first)
content_dict = json.loads(result.content)
print(content_dict)

and this is a sample of the output I get:

{'data': [{'created_time': '2019-02-15T17:24:29+0100', 'name': '20122301-FB-BOOST-EVENT-CC SDSDSD', 'effective_status': 'PAUSED', 'id': '6118169436761'}

There is only the name of the campaign and no insights!
Has anyone managed to retrieve stats/insights for paused ads/campaigns before?

Thanks !

Please check my other post about my Python script: I can't fetch stats for all my facebook campaigns using Python and Facebook Marketing API

2 Answers


  1. Chosen as BEST ANSWER

    After days of digging around, I finally came up with a script that I ran to extract 3 years of Facebook Ads insights while avoiding the Facebook API rate limit.

    First, we import the libraries we'll need:

    from facebookads.api import FacebookAdsApi
    from facebookads.adobjects.adsinsights import AdsInsights
    from facebookads.adobjects.adaccount import AdAccount
    from facebookads.adobjects.business import Business
    import datetime
    import csv
    import re 
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    from google.colab import files
    import time
    

    Please note that after extracting the insights, I save them to Google Cloud Storage and then to BigQuery tables (a sketch of the assumed client setup follows the initialization below).

    access_token = 'my-token'
    ad_account_id = 'act_id'
    app_secret = 'app_s****'
    app_id = 'app_id****'
    FacebookAdsApi.init(app_id,app_secret, access_token=access_token, api_version='v3.2')
    account = AdAccount(ad_account_id)
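
    Since the script below also uploads each day's CSV to Google Cloud Storage and loads it into BigQuery, it assumes that bucket, bq_client and dataset objects already exist. A minimal sketch of that setup (the project, bucket and dataset names here are placeholders, not the ones I actually used):

    from google.cloud import storage
    from google.cloud import bigquery

    # Placeholder project / bucket / dataset names -- replace with your own
    storage_client = storage.Client(project='my-project')
    bucket = storage_client.bucket('my-project')      # used later by bucket.blob(...)
    bq_client = bigquery.Client(project='my-project')
    dataset = bq_client.dataset('my_dataset')         # used later by dataset.table(...)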
    

    Then, the following script calls the API and checks how close we are to the rate limit:

    import logging
    import requests as rq
    
    #Function to find the string between two strings or characters
    def find_between( s, first, last ):
        try:
            start = s.index( first ) + len( first )
            end = s.index( last, start )
            return s[start:end]
        except ValueError:
            return ""
    
    #Function to check how close you are to the FB Rate Limit
    def check_limit():
        check=rq.get('https://graph.facebook.com/v3.1/'+ad_account_id+'/insights?access_token='+access_token)
        usage=float(find_between(check.headers['x-ad-account-usage'],':','}'))
        return usage
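
    For reference, the x-ad-account-usage header is a small JSON blob and find_between simply pulls the number out of it. A quick illustration (the header value shown is made up, not a real response):

    sample_header = '{"acc_id_util_pct":9.67}'    # illustrative header value
    print(find_between(sample_header, ':', '}'))  # prints 9.67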
    

    Now, this is the whole script that you can run to extract the data of the last Y days!

    Y = 30  # how many days back to go (the loop below pulls one day at a time, starting from yesterday)
    for x in range(1, Y):
    
      date_0 = datetime.datetime.now() - datetime.timedelta(days=x )
      date_ = date_0.strftime('%Y-%m-%d')
      date_compact = date_.replace('-', '')
      filename = 'fb_%s.csv'%date_compact
      filelocation = "./"+ filename
      # Open or create a new file
      try:
          csvfile = open(filelocation, 'w+')
      except IOError as err:
          print("Cannot open file:", err)
    
    
      # To keep track of rows added to file
      rows = 0
    
      try:
          # Create file writer
          filewriter = csv.writer(csvfile, delimiter=',')
          filewriter.writerow(['date','ad_name', 'adset_id', 'adset_name', 'campaign_id', 'campaign_name', 'clicks', 'impressions', 'spend'])
      except Exception as err:
          print(err)
      # Iterate through all accounts in the business account
    
      ads = account.get_insights(params={'time_range': {'since':date_, 'until':date_}, 'level':'ad' }, fields=[AdsInsights.Field.ad_name, AdsInsights.Field.adset_id, AdsInsights.Field.adset_name, AdsInsights.Field.campaign_id, AdsInsights.Field.campaign_name, AdsInsights.Field.clicks, AdsInsights.Field.impressions, AdsInsights.Field.spend ])
      for ad in ads:
    
        # Set default values in case the insight info is empty
        date = date_
        adsetid = ""
        adname = ""
        adsetname = ""
        campaignid = ""
        campaignname = ""
        clicks = ""
        impressions = ""
        spend = ""
    
        # Set values from insight data
        if ('adset_id' in ad) :
            adsetid = ad[AdsInsights.Field.adset_id]
        if ('ad_name' in ad) :
            adname = ad[AdsInsights.Field.ad_name]
        if ('adset_name' in ad) :
            adsetname = ad[AdsInsights.Field.adset_name]
        if ('campaign_id' in ad) :
            campaignid = ad[AdsInsights.Field.campaign_id]
        if ('campaign_name' in ad) :
            campaignname = ad[AdsInsights.Field.campaign_name]
        if ('clicks' in ad) : # This is stored strangely, takes a few steps to break through the layers
            clicks = ad[AdsInsights.Field.clicks]
        if ('impressions' in ad) : # This is stored strangely, takes a few steps to break through the layers
            impressions = ad[AdsInsights.Field.impressions]
        if ('spend' in ad) :
            spend = ad[AdsInsights.Field.spend]
    
        # Write all ad info to the file, and increment the number of rows that will display
        filewriter.writerow([date_, adname, adsetid, adsetname, campaignid, campaignname, clicks, impressions, spend])
        rows += 1
    
      csvfile.close()
    
      # Print report
      print (str(rows) + " rows added to the file " + filename)
      print(check_limit(), 'reached of rate limit')
      ## write to GCS and BQ
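      # 'bucket', 'bq_client', 'dataset' and 'bigquery' come from the GCS/BigQuery setup sketched earlier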
      blob = bucket.blob('fb_2/fb_%s.csv'%date_compact)
      blob.upload_from_filename(filelocation)
      load_job_config = bigquery.LoadJobConfig()
      table_name = '0_fb_ad_stats_%s' % date_compact
      load_job_config.write_disposition = 'WRITE_TRUNCATE'
      load_job_config.skip_leading_rows = 1
    
      # The source format defaults to CSV, so the line below is optional.
      load_job_config.source_format = bigquery.SourceFormat.CSV
      load_job_config.field_delimiter = ','
      load_job_config.autodetect = True
      uri = 'gs://my-project/fb_2/fb_%s.csv'%date_compact
      load_job = bq_client.load_table_from_uri(
        uri,
        dataset.table(table_name),
        job_config=load_job_config)  # API request
      print('Starting job {}'.format(load_job.job_id))
      load_job.result()  # Waits for table load to complete.
      print('Job finished.')
    
      if check_limit() >= 75:
        print('75% rate limit reached. Cooling down for 225 seconds.')
        logging.debug('75% rate limit reached. Cooling down for 225 seconds.')
        time.sleep(225)
    

    This works perfectly, but note that if you're planning to extract 3 years of data, the script will take a long time to run!

    I'd like to thank LucyTurtle and Ashish Baid for their scripts, which helped me during my work!

    Please refer to this post if you need more details, or if you need to extract data for one day across different ad accounts:

    Facebook Marketing API - Python to get Insights - User Request Limit Reached


  2. You can combine several filtering criteria. For example, to filter paused campaigns whose name contains the string 'name' and that were created after March 1, you can use:

    act_105433210/campaigns?filtering=[{'field':'effective_status','operator':'IN','value':['PAUSED']},{'field':'name','operator':'CONTAIN','value':'name'},{'field':'created_time','operator':'GREATER_THAN','value':'1551444673'}]&fields=created_time,name,effective_status,insights{spend,impressions,clicks}
    

    The timestamp should be a Unix epoch timestamp; in the example it is:

    Epoch timestamp: 1551444673
    Human time (GMT): Friday, March 1, 2019, 12:51:13 PM
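
    A sketch of how you might build this filtering parameter in Python (the account id and token are the placeholders from the question, and the timestamp is computed for midnight UTC on March 1, 2019 rather than the exact example value):

    import datetime
    import json
    import requests

    token = 'my-token'  # placeholder access token

    # Unix epoch timestamp for March 1, 2019 00:00 UTC
    since = int(datetime.datetime(2019, 3, 1, tzinfo=datetime.timezone.utc).timestamp())

    filtering = json.dumps([
        {'field': 'effective_status', 'operator': 'IN', 'value': ['PAUSED']},
        {'field': 'name', 'operator': 'CONTAIN', 'value': 'name'},
        {'field': 'created_time', 'operator': 'GREATER_THAN', 'value': since},
    ])

    params = {
        'filtering': filtering,
        'fields': 'created_time,name,effective_status,insights{spend,impressions,clicks}',
        'access_token': token,
    }
    resp = requests.get('https://graph.facebook.com/v3.2/act_105433210/campaigns', params=params)
    print(resp.json())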
