This blog post explains how to use a Python-only notebook to get all the Fabric items using the Fabric REST API.

NOTE: At the time of writing (Feb 2025), Dataflow Gen2 is not included in the Fabric items; I am sure it will be added in the future.

NOTE II: This only gets the Fabric items; it does not include the Power BI items.

I thought I would start by saying that a Python-only notebook uses significantly fewer Capacity Units (CUs) than a Spark notebook.

As shown below, the Python notebook consumed 22 CUs vs the Spark notebook's 401 CUs, which is roughly 18 times less consumption.


Python Notebook

I wanted to see if it was possible to do this with a Python Notebook.

I must admit that this took a lot longer than I thought to get working. I did not realize that many of the functions I had previously used were Spark functions.

As shown below, this is my Python notebook.


Authentication

I could use my existing code to authenticate using my Service Principal account.
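
To give an idea of what that step can look like, here is a minimal sketch of Service Principal authentication using the azure-identity package; the tenant, client ID and secret values are placeholders, and this is not the exact code from my notebook:

```python
# A minimal sketch of Service Principal authentication using azure-identity.
# The tenant_id, client_id and client_secret values are placeholders; in
# practice they should come from somewhere secure, such as a Key Vault.
from azure.identity import ClientSecretCredential

tenant_id = "<your-tenant-id>"
client_id = "<your-app-client-id>"
client_secret = "<your-app-secret>"

credential = ClientSecretCredential(tenant_id, client_id, client_secret)

# Request a token scoped to the Fabric REST API
token = credential.get_token("https://api.fabric.microsoft.com/.default")
headers = {"Authorization": f"Bearer {token.token}"}
```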

Getting all the Fabric Items

The next piece of code below is one that I did not have to change from my previous code.

I am downloading the file to the Files section so that I have a copy of the JSON output, in case I need to bring in the data again.
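
As a rough illustration, here is a hedged sketch of that call using the Fabric Admin "List Items" endpoint; the paging fields (itemEntities, continuationUri) follow my reading of the API documentation, and the /lakehouse/default/Files path assumes a default lakehouse is attached to the notebook:

```python
# A sketch of calling the Fabric Admin "List Items" endpoint and saving
# the raw JSON to the Files section of the attached default lakehouse.
# The headers variable comes from the authentication step above.
import json
import requests

url = "https://api.fabric.microsoft.com/v1/admin/items"
items = []

# The admin API pages its results; follow the continuation URI until done
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    items.extend(payload.get("itemEntities", []))
    url = payload.get("continuationUri")

# Keep a copy of the JSON output in the Files section in case the data
# needs to be brought in again later
with open("/lakehouse/default/Files/fabric_items.json", "w") as f:
    json.dump(items, f)
```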

Converting JSON file to Lakehouse table

This is where it took me quite a while to get things working. I was determined to get this done using Python code, and as shown below, I got it working.

Credit to Fabric Python Code, which assisted me in reading the JSON file and successfully writing it to a Lakehouse table.
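
For readers who want a starting point, a minimal sketch of the read step might look like this, assuming the fabric_items.json file saved earlier; pandas' json_normalize flattens the nested fields into ordinary columns:

```python
# A sketch of reading the saved JSON file into a pandas DataFrame.
# The file path assumes the copy saved to the Files section earlier.
import json
import pandas as pd

with open("/lakehouse/default/Files/fabric_items.json") as f:
    data = json.load(f)

# Flatten any nested JSON fields into ordinary columns
df = pd.json_normalize(data)
print(df.head())
```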


Writing data to a Lakehouse table

The final piece is to take the information and write it to a Lakehouse table.

Once again, credit must go to Mim, whose blog post helped me figure out how to write to a Lakehouse table using DuckDB: Loading Delta Table to Fabric OneLake using Delta Rust – Small Data And self service

It is worth noting that Lines 14 and 15 are where I put in the Source and Target tables.

Line 47 is where the table gets written; I have found it easier to simply overwrite the table on each run to ensure that no data is lost.
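
Below is a hedged sketch of that pattern, combining DuckDB with the deltalake (delta-rs) package along the lines of Mim's approach; the fabric_items table name and the /lakehouse/default/Tables path are my assumptions, not the exact code from the screenshot:

```python
# A sketch of writing the DataFrame to a Lakehouse Delta table using
# DuckDB and the deltalake (delta-rs) package.
import duckdb
from deltalake import write_deltalake

# DuckDB can query the pandas DataFrame (df) directly and hand back an
# Arrow table, which write_deltalake accepts
arrow_table = duckdb.sql("SELECT * FROM df").arrow()

# Overwrite the table on each run so the load is idempotent and no data
# is lost between refreshes; the table name is an assumption
write_deltalake("/lakehouse/default/Tables/fabric_items", arrow_table, mode="overwrite")
```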

Summary

In this blog post I have shown how to use a Python notebook to download information from the Fabric API and store the data in a Lakehouse table.

You can find a sample of the Notebook here: Fabric/Blog – Get All Fabric Items – Actual Pure Python.ipynb at main · GilbertQue/Fabric

If there are any questions or comments, please let me know!

Thanks for reading.