I'm using a notebook on Azure Databricks; the notebook lives in my user repo. I want to write a CSV file created by this notebook into that repo.
When I use the code below:
df_pandas.to_csv('test.csv', index=False, header=False)
there is no error, but the file is not written to the notebook's repo.
Does anyone have a clue?
I've tried writing the complete path as well as just the CSV name, but I still get the same error:
Cannot save file into a non-existent directory: '/Users/*********/repo_one/repo_two'
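A minimal reproduction of the same error (the path here is made up): pandas raises an OSError whenever the parent directory of the target path does not exist, since to_csv() does not create directories.

```python
import pandas as pd

df_pandas = pd.DataFrame({"a": [1], "b": [2]})

# pandas checks that the parent directory exists before writing;
# if it does not, to_csv() raises OSError instead of creating it.
try:
    df_pandas.to_csv("/no_such_dir/repo_two/test.csv", index=False, header=False)
    error_message = None
except OSError as exc:
    error_message = str(exc)

print(error_message)
```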
2 Answers
Hi, thanks for the explanation. But do you know how to write the CSV file not to a DBFS path, but somewhere I can retrieve it here, in the Workspace folder where my notebook is:
Thanks for the help again!
You can write the file to the Databricks File System (DBFS). The toPandas() method is used to convert the Spark dataframe to a Pandas dataframe, the to_csv() method is used to convert the Pandas dataframe to a CSV string, and the dbutils.fs.put() method is used to write the CSV string to the specified file path in DBFS. The code is below:
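The code block itself did not survive in the thread; a sketch of what it likely looked like, following the steps described above (the variable names and the DBFS path are illustrative, and the dbutils call only works inside a Databricks notebook):

```python
import pandas as pd

# Stand-in data; in the notebook this would come from df.toPandas(),
# where df is the Spark dataframe.
df_pandas = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Calling to_csv() with no path returns the CSV text as a string
# instead of writing a file.
csv_string = df_pandas.to_csv(index=False, header=False)

# On Databricks, dbutils.fs.put() would then write the string to DBFS
# (path is illustrative):
# dbutils.fs.put("dbfs:/tmp/test.csv", csv_string, overwrite=True)
```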