
I ran into the issue below when trying to run a simple PySpark script in Azure:

%%pyspark
df = spark.read.load('abfss://[email protected]/userdata1.parquet', format='parquet')
display(df.limit(10))

InvalidHttpRequestToLivy: Your Spark job requested 24 vcores. However, the workspace has a 12 core limit. Try reducing the numbers of vcores requested or increasing your vcore quota. HTTP status code: 400. Trace ID: 3308513f-be78-408b-981b-cd6c81eea8b0.

I am new to Azure and am currently on the free trial. Do you know how to reduce the number of vcores requested?

Thanks a lot

3 Answers


  1. You can solve this by closing your other notebook tabs. Closing a tab ends that tab's Spark session, which frees the vcores it was holding.

  2. Got the same issue while trying to run a PySpark script and found two ways to solve it:

    1. Reduce your usage of the pool's resources before submitting a new resource
      request (i.e., before running another job or notebook)
    2. Scale up the node size and/or the number of nodes

    You can get more info here
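
    For the first option, here is a minimal sketch of shrinking the session request in a Synapse notebook. The `%%configure` magic (run in its own cell, before any Spark code) passes session settings to Livy; the sizes below are illustrative assumptions, not recommended values:

    ```
    %%configure -f
    {
        "driverCores": 2,
        "driverMemory": "4g",
        "executorCores": 2,
        "executorMemory": "4g",
        "numExecutors": 2
    }
    ```

    With these numbers the session asks for roughly 2 driver cores plus 2 x 2 executor cores, i.e. about 6 vcores, which fits under a 12-core quota.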

  3. Found some relevant links from Microsoft which can help:

    1. Reduce your usage of the pool resources before submitting a new resource request by running a job or notebook

    2. Scale up the node size and the number of nodes

    3. Or try increasing your quota; see these Microsoft docs:

    Azure Synapse Analytics: Apache Spark

    Increase VM-family vCPU quotas
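
    If you go the scaling route, a hedged command-line sketch using the Azure CLI's synapse commands (all names are placeholders for your own resources):

    ```
    # Scale an existing Apache Spark pool; the pool, workspace, and
    # resource-group names below are placeholders.
    az synapse spark pool update \
        --name mysparkpool \
        --workspace-name myworkspace \
        --resource-group myresourcegroup \
        --node-count 6 \
        --node-size Medium
    ```

    Note that on a free trial the regional vcore quota may still cap you, in which case the quota-increase guidance above is the fix.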
