
Google Cloud Functions (gen 2) gives us 9 possible memory allocation settings for each function, from 128MiB to 32GiB.

  1. For a hello-world function that does nothing but log "hello world" to the console, is memory here just the total number of bytes consumed by the source code?

  2. Is this memory allocation per function invocation or per function instance (since instances can be configured to handle multiple invocations simultaneously)?


Answers


  1. It is true that in Google Cloud Functions (gen 2) the memory setting refers to the amount of memory allocated to each function instance; it is not based on the size of the source code alone. The allocation determines the resources available while the function executes, including variables, loaded dependencies, and any other memory the code needs at runtime.

    The allocation is per function instance, not per invocation, since instances can be configured to handle multiple invocations concurrently. Each instance receives the configured amount of memory, and that allocation stays constant for the lifetime of the instance (a deploy-time sketch follows after these answers).

    Hopefully you have already gone through the configuring-memory and min-instances documentation; if not, please start with those references.

  2. You should read the detailed documentation for Cloud Functions pricing to understand how you are billed per function invocation. It's not as simple as the question makes it out to be.

    Firstly, memory is not the only consideration. The unit of billing that involves memory is "compute time", which combines "GB-seconds" (units of memory allocated per second) and "GHz-seconds" (units of CPU allocated per second) for the vCPU assigned to the instance. Also note that files you write to the /tmp filesystem occupy memory and are billed while they exist; if you don't delete them when a function finishes, they keep consuming that instance's memory across future invocations.

    When you configure memory for a function, you are really saying how much total memory is allocated to the server instance that handles the function. To answer your question #1: your source code accounts for only a (likely) very small part of the memory the instance uses; far more goes to things like the OS kernel (Linux) and the other software (Node.js, etc.) needed to operate the instance.

    What actually matters for billing is how much memory and CPU you have allocated, multiplied by the time each server instance must run to handle the load from all invocations. In gen 2, if invocations run in parallel on the same instance, you are not charged extra for that overlap in time; you pay only for the time each running instance needs to service its invocations (see the billing arithmetic sketch after these answers).

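To make the per-instance point from answer 1 concrete, here is a minimal sketch of a gen 2 hello-world HTTP function using the Node.js Functions Framework. Memory never appears in the code itself; it is a property of the instance, set at deploy time. The function name, runtime, and the gcloud flags and values in the comment are illustrative assumptions; check the current gcloud reference before relying on them.

```typescript
// Minimal gen 2 HTTP function. Note that memory is not set anywhere in the
// code: it is a deploy-time property of the instance that runs this code.
import * as functions from '@google-cloud/functions-framework';

functions.http('helloWorld', (_req, res) => {
  // The instance's configured memory (e.g. 256MiB) is shared by every
  // invocation currently running on this instance.
  console.log('hello world');
  res.send('hello world');
});

// Illustrative deployment (flag names and values are assumptions; verify
// against the current gcloud documentation):
//
//   gcloud functions deploy hello-world \
//     --gen2 --runtime=nodejs20 --trigger-http \
//     --memory=256Mi --concurrency=8
//
// --memory sets memory per *instance* (not per invocation); --concurrency
// lets up to 8 invocations share that one instance and its memory.
```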
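And to illustrate the billing point from answer 2 (charges scale with allocated memory and CPU per unit of instance time, with no extra charge for overlapping invocations), here is a back-of-the-envelope sketch. The memory size, CPU figure, invocation count, and duration are all assumed numbers, and the model ignores rounding, free tiers, and actual rates; see the Cloud Functions pricing page for the real formula.

```typescript
// Rough model of compute-time billing: allocated memory and CPU, multiplied
// by the time an instance spends servicing invocations. Illustrative only.

const memoryGiB = 256 / 1024;     // assumed: 256MiB configured per instance
const cpuGHz = 2.4;               // assumed: effective clock of the allocated vCPU
const invocations = 10;           // assumed: ten requests arrive
const secondsPerInvocation = 1;   // assumed: each needs one second of work

// One instance handling requests strictly one at a time stays busy 10 s.
const sequentialInstanceSeconds = invocations * secondsPerInvocation;

// With concurrency >= 10, the same instance can overlap all ten requests,
// so (ideally) it is only busy for about 1 s of wall-clock time.
const concurrentInstanceSeconds = secondsPerInvocation;

const scenarios: Array<[string, number]> = [
  ['sequential', sequentialInstanceSeconds],
  ['concurrent', concurrentInstanceSeconds],
];

for (const [label, seconds] of scenarios) {
  const gbSeconds = memoryGiB * seconds;  // memory component of compute time
  const ghzSeconds = cpuGHz * seconds;    // CPU component of compute time
  console.log(`${label}: ${gbSeconds} GB-seconds, ${ghzSeconds} GHz-seconds`);
}
```

The ten-fold difference between the two scenarios is exactly the "no charge for overlap" point: the bill tracks instance-busy time, not the raw number of invocations.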