I ran into the issue below when trying to run a simple pyspark script in Azure:
%%pyspark
df = spark.read.load('abfss://[email protected]/userdata1.parquet', format='parquet')
display(df.limit(10))
InvalidHttpRequestToLivy: Your Spark job requested 24 vcores. However, the workspace has a 12 core limit. Try reducing the numbers of vcores requested or increasing your vcore quota. HTTP status code: 400. Trace ID: 3308513f-be78-408b-981b-cd6c81eea8b0.
I am new to Azure and am currently using the free trial. Do you know how to reduce the number of vcores requested?
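For context, a Synapse Spark pool requests node_count × vcores_per_node when a session starts. A rough sketch of the arithmetic behind the error, assuming the usual Synapse node sizes (Small = 4 vcores, Medium = 8, Large = 16; check your pool's actual size in the portal):

```python
# Assumed Synapse node sizes in vcores (verify against your pool settings).
VCORES_PER_NODE = {"Small": 4, "Medium": 8, "Large": 16}

def requested_vcores(node_size, node_count):
    """Total vcores a Spark pool requests: node count x vcores per node."""
    return VCORES_PER_NODE[node_size] * node_count

quota = 12  # free-trial workspace limit, per the error message

# A pool of 3 Medium nodes asks for 24 vcores, which exceeds the quota.
print(requested_vcores("Medium", 3), requested_vcores("Medium", 3) <= quota)
# A pool of 3 Small nodes asks for exactly 12, which fits.
print(requested_vcores("Small", 3), requested_vcores("Small", 3) <= quota)
```

So a 24-vcore request like the one in the error would come from, e.g., 3 Medium nodes; shrinking the pool to Small nodes (or fewer nodes) brings it under the 12-core limit.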
Thanks a lot
3 Answers
You can work around this by closing your other open notebook tabs; closing a tab ends its Spark session and frees the vcores it was holding.
Got the same issue while trying to run a pyspark script and found two ways to solve it: reduce your current usage of the pool (stop idle sessions before submitting a new job or notebook), or increase the vcore quota for the workspace. You can get more info here.
Found a relevant link from Microsoft which can help:
Reduce your usage of the pool resources before submitting a new resource request by running a job or notebook
You can also scale down the node size or the number of nodes in the Spark pool so that the job requests fewer vcores than the quota
Or try these docs:
Azure Synapse Analytics: Apache Spark
Increase VM-family vCPU quotas
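If you would rather cap the session itself instead of resizing the pool, Synapse notebooks support the %%configure magic, which sets session properties before the Spark session starts. A sketch, where the specific core counts are illustrative, not prescribed values:

```
%%configure -f
{
    "driverCores": 2,
    "executorCores": 2,
    "numExecutors": 2
}
```

With these settings the session asks for driverCores + executorCores × numExecutors = 2 + 2 × 2 = 6 vcores, which stays under the 12-core trial quota. Run this in its own cell before any other cell, since it restarts the session.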