I am trying to get a list of all collects. When I do the API call to count them:
https://[store-username].myshopify.com/admin/collects/count.json
HTTP/1.1 200 OK
{
"count": 307
}
I know the limit is 250 records per page and the default page is 1:
https://[store-username].myshopify.com/admin/collects.json?limit=250&page=1
This returns 250 records.
When I request page 2, I get exactly the same records as on page 1:
https://[store-username].myshopify.com/admin/collects.json?limit=250&page=2
Out of curiosity I tried page 10, which is out of range (that would be 2500 > 307), and it still returned the same 250 records as page 1.
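To make the duplication easier to see, here is a minimal Python 2 / urllib2 sketch (same basic-auth setup as my script below; the credentials are placeholders) that prints the first and last collect IDs of each page:

import json
import urllib2

BASE = "https://[store-username].myshopify.com/admin/collects.json"

# Basic-auth setup mirroring the script below; credentials are placeholders.
security = urllib2.HTTPPasswordMgrWithDefaultRealm()
security.add_password(None, BASE, "[credentials]", "[credentials]")
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(security))

for page in (1, 2):
    url = BASE + "?limit=250&page=" + str(page)
    collects = json.loads(opener.open(url).read())["collects"]
    # If both pages print the same first/last IDs, they really are duplicates.
    print("page %d: %d collects, first id %s, last id %s"
          % (page, len(collects), collects[0]["id"], collects[-1]["id"]))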
Second thing:
When I put this into Python code and run a script against
https://[store-username].myshopify.com/admin/collects.json?limit=250&page=1
it gets 250 records, and then I request page 2:
https://[store-username].myshopify.com/admin/collects.json?limit=250&page=2
and the return is None.
I am pulling my hair out. I can't get all 307 records into my database, only 250. I have no idea why page 2 in the browser loads exactly the same records as page 1 (page 2 should hold the remaining 307 - 250 = 57 records), while in the script it returns None.
Can you help?
def handle(self, *args, **options):
    security = urllib2.HTTPPasswordMgrWithDefaultRealm()
    security.add_password(None, "https://[store-username].myshopify.com/admin/collects/count.json",
                          "[credentials]", "[credentials]")
    auth_handler = urllib2.HTTPBasicAuthHandler(security)
    opener = urllib2.build_opener(auth_handler)
    urllib2.install_opener(opener)

    # Get the total number of collects.
    url = 'https://[store-username].myshopify.com/admin/collects/count.json'
    collect_feed = urllib2.urlopen(url)
    data = collect_feed.read()
    js = json.loads(str(data))
    count = int(js['count'])

    page_size = 250
    # Ceil of a float division so the partial last page is counted.
    pages = int(math.ceil(count / float(page_size)))

    list_of_collects = Collect.objects.all()
    if list_of_collects:
        list_of_collects.delete()

    current_page = 1
    while current_page <= pages:
        opening_url = "https://[store-username].myshopify.com/admin/collects.json?limit=" + str(page_size) + '&page=' + str(current_page)
        security.add_password(None, opening_url,
                              "[credentials]", "[credentials]")
        auth_handler = urllib2.HTTPBasicAuthHandler(security)
        opener = urllib2.build_opener(auth_handler)
        urllib2.install_opener(opener)
        try:
            collect_feed = urllib2.urlopen(opening_url)
        except:
            collect_feed = None
        if collect_feed != None:
            data = collect_feed.read()
            try:
                js = json.loads(str(data))
            except:
                js = None
            for x in range(0, len(js['collects'])):
                single_collect = list_of_collects.filter(collect_id=js['collects'][x]["id"]).first()
                if single_collect == None:
                    # Create the model you want to save the image to
                    collect = Collect(collect_id=js['collects'][x]["id"],
                                      collection_id=js['collects'][x]["collection_id"],
                                      product_id=js['collects'][x]["product_id"])
                    collect.save()
                    print("Product and Collection connection number " + str(x) + " was successfully saved")
        else:
            print("NO FEED")
        print("BATCH IS SUCCESSFULLY SAVED, PROCESSING THE NEXT ONE")
        current_page += 1

    self.stdout.write(self.style.SUCCESS('Collects Updated.'))
When I run the script, I get:
Product and Collection connection number 0 was successfully saved
Product and Collection connection number 1 was successfully saved
Product and Collection connection number 2 was successfully saved
[…]
Product and Collection connection number 249 was successfully saved
FIRST BATCH IS SUCCESSFULLY SAVED, PROCESSING THE NEXT ONE
sleeping for 20 seconds
NO FEED
SECOND BATCH IS SUCCESSFULLY SAVED, PROCESSING THE NEXT ONE
Collects Updated.
Based on that output, page 2 is returning None, but why, when the count is 307 > 250?
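For reference, the bare except in my loop hides whatever error Shopify actually returns; a minimal sketch of how the failure could be surfaced instead of silently setting collect_feed to None (opening_url and the urllib2 setup as in the code above):

try:
    collect_feed = urllib2.urlopen(opening_url)
except urllib2.HTTPError as e:
    # The response body usually explains the failure (bad parameters, throttling, ...).
    print("HTTP %d from %s: %s" % (e.code, opening_url, e.read()))
    collect_feed = None
except urllib2.URLError as e:
    print("Could not reach %s: %s" % (opening_url, e.reason))
    collect_feed = None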
2 Answers
Here is the final script, thanks to Daniel's fix.
You need to use & to separate elements in a querystring: https://[store-username].myshopify.com/admin/collects.json?limit=250&page=1
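If the querystring is built by hand, urllib.urlencode (Python 2, to match the code in the question) keeps the separators and escaping correct; a small sketch reusing the page_size and current_page variables from the question's script:

import urllib

params = urllib.urlencode({"limit": page_size, "page": current_page})
opening_url = "https://[store-username].myshopify.com/admin/collects.json?" + params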
Note: you really should use the Shopify Python client, though.
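For example, with the shopify Python API package, paging through collects could look roughly like this (a sketch assuming the old page-based pagination and private-app credentials; the key, password and store name are placeholders):

import shopify

shopify.ShopifyResource.set_site(
    "https://[api-key]:[password]@[store-username].myshopify.com/admin")

page = 1
all_collects = []
while True:
    batch = shopify.Collect.find(limit=250, page=page)
    if not batch:
        break
    all_collects.extend(batch)
    if len(batch) < 250:
        break  # a short page means this was the last one
    page += 1

print("fetched %d collects" % len(all_collects))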