
I’ve hit the hourly request limit on my current external API, so I’ve set up multiple additional APIs. Now I’m looking for a strategy to manage these APIs sequentially. How can I ensure a seamless transition between the APIs, using them one by one, to prevent any disruption in service while staying within each API’s request limit?

Please give some code so that I can use API_1, API_2, and API_3 for the same purpose multiple times in the same Node.js code.

3 Answers


  1. Here’s some code: just make a round-robin queue.

    const API_1 = () => console.log("hello from API_1")
    const API_2 = () => console.log("hello from API_2")
    const API_3 = () => console.log("hello from API_3")
    
    // Round-robin over the APIs: call one, then advance to the next,
    // wrapping back to the first after the last.
    const arr_api = [API_1, API_2, API_3]
    let pointer = 0;
    
    function callAPI() {
      const the_api = arr_api[pointer]
      the_api();
      pointer = (pointer + 1) % arr_api.length;
    }
    
    setInterval(callAPI, 1000);
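
    A hedged extension of the same idea: in practice each API call is asynchronous, and you may only want to rotate when an API actually reports that its limit is reached. The `RateLimitError` class and the shape of the `apis` array below are assumptions for illustration, not part of the answer above; substitute your real client calls and your real "limit reached" check (e.g. an HTTP 429 status).

```javascript
// Sketch only: rotate to the next API when the current one reports a
// rate limit. RateLimitError is a hypothetical error type; replace the
// instanceof check with however your client signals "limit reached".
class RateLimitError extends Error {}

function makeRotatingCaller(apis) {
  let pointer = 0; // index of the API currently in use
  return async function call(...args) {
    for (let attempt = 0; attempt < apis.length; attempt++) {
      const api = apis[pointer];
      try {
        return await api(...args);
      } catch (err) {
        if (!(err instanceof RateLimitError)) throw err;
        // This API is exhausted; advance to the next one and retry.
        pointer = (pointer + 1) % apis.length;
      }
    }
    throw new Error("All APIs are currently rate limited");
  };
}
```

    Usage would look like `const call = makeRotatingCaller([API_1, API_2, API_3]); await call();` — the pointer stays on the last working API between calls, so you don’t re-hit an exhausted one on every request.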
  2. Here is an example with a per-API call limit:

    function makeQueueWithLimit(limit, ...fns) {
        let index = 0
        let count = 0
    
        return function() {
            // After `limit` calls, move on to the next function,
            // wrapping back to the first one at the end of the list.
            if (count >= limit) {
                index++
                count = 0
            }
    
            if (index >= fns.length) {
                index = 0
            }
    
            const fn = fns[index]
            fn()
            count++
        }
    }
    
    function fn1() {
        console.log(`fn 1`)
    }
    
    function fn2() {
        console.log(`fn 2`)
    }
    
    function fn3() {
        console.log(`fn 3`)
    }
    
    const queueFn = makeQueueWithLimit(5, fn1, fn2, fn3)
    
    const int = setInterval(() => {
        queueFn()
    }, 100)
    
    setTimeout(() => {
        clearInterval(int)
    }, 2000)
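
    The queue above rotates after a fixed number of calls but never returns to an earlier API. A hedged variant, sketched below, resets each function’s count once its time window expires, so an API becomes usable again after the limit period. The `windowMs` parameter and the throw-when-exhausted behaviour are assumptions, not part of the answer above.

```javascript
// Sketch: like makeQueueWithLimit, but each function's count resets when
// its window (e.g. one hour) has elapsed, so earlier APIs come back
// into rotation. Throws if every function is exhausted in this window.
function makeWindowedQueue(limit, windowMs, ...fns) {
  const state = fns.map(() => ({ count: 0, windowStart: 0 }));
  let index = 0;
  return function () {
    for (let tried = 0; tried < fns.length; tried++) {
      const s = state[index];
      const now = Date.now();
      if (now - s.windowStart >= windowMs) {
        // This function's window has expired; start a fresh one.
        s.count = 0;
        s.windowStart = now;
      }
      if (s.count < limit) {
        s.count++;
        return fns[index]();
      }
      // Current function has hit its limit; try the next one.
      index = (index + 1) % fns.length;
    }
    throw new Error("All APIs have hit their limit for this window");
  };
}
```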
    
  3. Instead of finding a way around the limit, I suggest you first review the API calls you are making. Here are some general tips.


    Are you making any 1+N calls?

    A 1+N call is when you request a collection, then for each result in the collection fire another request. Let me use SQL as an example to show you what I mean.

    Say I request some blog posts with SELECT * FROM posts LIMIT 20 OFFSET 40 (20 per page, 3rd page). I can then iterate over the posts and request the comments for each post using SELECT * FROM comments WHERE post_id = ?, where ? is set to the id of the post.

    The above scenario yields the correct results, but fires 1 + 20 requests to get them. This can be reduced by requesting the comments for all the relevant posts in one request, then combining the results programmatically. So instead of running SELECT * FROM comments WHERE post_id = ? for each post, we do a single SELECT * FROM comments WHERE post_id IN (?, ?, ...) query. This gives us all the comments we need in a total of 2 requests.

    See: What is the "N+1 selects problem" in ORM (Object-Relational Mapping)?

    The same applies to API calls. If you’re making any 1+N requests, check whether you can reduce the number of requests by fetching multiple resources in a single API call.
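
    To make the SQL example concrete for API calls, here is a minimal sketch. The `/posts` and `/comments` endpoints and the comma-separated `post_id` filter are hypothetical; many real APIs offer a similar "filter by a list of ids" parameter. `fetchJson` stands in for your HTTP client.

```javascript
// Sketch: collapse a 1+N request pattern into exactly 2 requests by
// fetching all comments for a page of posts in one call, then joining
// the results in memory. Endpoints here are hypothetical.
async function fetchPostsWithComments(fetchJson) {
  const posts = await fetchJson("/posts?page=3&per_page=20");
  const ids = posts.map((p) => p.id).join(",");
  // One request for all comments instead of one request per post.
  const comments = await fetchJson(`/comments?post_id=${ids}`);
  // Group comments by post_id, then attach them to their posts.
  const byPost = new Map();
  for (const c of comments) {
    if (!byPost.has(c.post_id)) byPost.set(c.post_id, []);
    byPost.get(c.post_id).push(c);
  }
  return posts.map((p) => ({ ...p, comments: byPost.get(p.id) ?? [] }));
}
```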


    Does the API have an endpoint for batch actions/operations?

    In some scenarios you might want to update/invoke an action upon multiple resources, but the API doesn’t allow you to pass a collection of updates to the relevant endpoint. In these scenarios an API batch call comes in handy. Implementation and usage depend entirely on the API, but some APIs allow you to batch multiple requests together and send them as a single API call.

    An example of this can be seen in the Mailchimp API.


    Can you reduce requests by caching data?

    In some scenarios you might be requesting the same data over and over again. In these situations, try caching the data to reduce the load you put on the API.
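
    A minimal sketch of such a cache, assuming an in-memory `Map` and a fixed time-to-live are acceptable for your data; `fetchFn` stands in for your real API call, and the key/TTL scheme is an assumption, not a prescribed design.

```javascript
// Sketch: wrap an API call so repeated requests for the same key within
// ttlMs are served from memory instead of hitting the API again.
function makeCached(fetchFn, ttlMs) {
  const cache = new Map(); // key -> { value, expires }
  return async function (key) {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh: no API call
    const value = await fetchFn(key);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

    Note the trade-off: a longer TTL saves more requests but serves staler data, so pick it based on how often the underlying resource actually changes.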
