46 changes: 7 additions & 39 deletions README.md
@@ -13,50 +13,18 @@ There are several ways to run the tutorial notebooks:
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/fastmachinelearning/hls4ml-tutorial/HEAD)

## Conda
Running the tutorials requires AMD Vitis HLS to be installed; it can be downloaded [here](https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/vitis.html).
After installation, set the necessary environment variables with:
```bash
source /path/to/your/installation/Xilinx/Vitis_HLS/202X.X/settings64.(c)sh
```
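To confirm the tools are available after sourcing, a quick check along these lines should work (a sketch; the exact version flag may differ between releases):
```bash
# Check that Vitis HLS is on the PATH and print its version
which vitis_hls
vitis_hls -version
```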

The Python environment used for the tutorials is specified in the `environment.yml` file.
It can be set up with:
```bash
conda env create -f environment.yml
conda activate hls4ml-tutorial
```

## Docker without Vivado
Pull the prebuilt image from the GitHub Container Registry:
```bash
docker pull ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0:latest
```

Alternatively, follow these steps to build a Docker image yourself, for local use or for a JupyterHub instance.
You can build the image (without Vivado) directly from the repository URL:
```bash
docker build https://github.com/fastmachinelearning/hls4ml-tutorial -f docker/Dockerfile
```
Alternatively, you can clone the repository and build locally:
```bash
git clone https://github.com/fastmachinelearning/hls4ml-tutorial
cd hls4ml-tutorial
docker build -f docker/Dockerfile -t ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0:latest .
```
Then to start the container:
```bash
docker run -p 8888:8888 ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0:latest
```
When the container starts, the Jupyter Notebook server launches and prints a link to open it in your browser.
You can clone the repository inside the container and run the notebooks (see the bind-mount sketch below for an alternative).
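To keep notebooks and generated HLS projects on the host rather than inside the container, a bind mount along these lines may help (the `/home/jovyan/work` target assumes the image follows the Jupyter Docker Stacks layout; adjust it to your image):
```bash
docker run -p 8888:8888 -v "$(pwd)":/home/jovyan/work \
    ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0:latest
```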

## Docker with Vivado
Pull the prebuilt image from the GitHub Container Registry:
```bash
docker pull ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0-vivado-2019.2:latest
```

To build the image with Vivado, run the following (warning: this takes a long time and requires substantial disk space):
```bash
docker build -f docker/Dockerfile.vivado -t ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0-vivado-2019.2:latest .
```
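If disk space is a concern, standard Docker commands show what images and build cache are consuming (general Docker usage, not specific to this image):
```bash
# Summarize disk usage of images, containers, volumes, and build cache
docker system df
```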
Then to start the container:
```bash
docker run -p 8888:8888 ghcr.io/fastmachinelearning/hls4ml-tutorial/hls4ml-0.8.0-vivado-2019.2:latest
```
If the Xilinx tools still need to be put on the `PATH` (for example, in a shell inside the container), source the settings script as before:
```bash
source /path/to/your/installation/Xilinx/Vitis_HLS/202X.X/settings64.(c)sh
```

## Companion material
40 changes: 0 additions & 40 deletions docker/Dockerfile

This file was deleted.

48 changes: 0 additions & 48 deletions docker/Dockerfile.vivado

This file was deleted.

33 changes: 0 additions & 33 deletions docker/install_vivado.sh

This file was deleted.

25 changes: 0 additions & 25 deletions docker/start-notebook.sh

This file was deleted.

30 changes: 0 additions & 30 deletions docker/vivado_cfg.txt

This file was deleted.

13 changes: 6 additions & 7 deletions environment.yml
@@ -2,21 +2,20 @@ name: hls4ml-tutorial
channels:
- conda-forge
dependencies:
- python=3.10.10
- jupyter_contrib_nbextensions==0.7.0
- jupyterhub==3.1.1
- jupyter-book==0.15.1
- python=3.10.16
- jupyter_contrib_nbextensions
- jupyterhub
- jupyter-book
- jsonschema-with-format-nongpl
- pydot==1.4.2
- graphviz==7.1.0
- scikit-learn==1.2.2
- tensorflow==2.11.1
- tensorflow==2.14.0
- tensorflow-datasets==4.8.3
- webcolors
- widgetsnbextension==3.6.0
- pip==23.0.1
- pip:
- hls4ml[profiling]==0.8.0
- qkeras==0.9.0
- hls4ml[profiling]==1.0.0
- conifer==0.2b0
- pysr==0.16.3
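For reference, an environment created from an earlier revision of this file can be brought in line with the pins above instead of being recreated (a standard conda workflow, not specific to this repo):
```bash
conda env update -f environment.yml --prune
```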
34 changes: 21 additions & 13 deletions part1_getting_started.ipynb
@@ -27,7 +27,7 @@
"tf.random.set_seed(seed)\n",
"import os\n",
"\n",
"os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']"
"os.environ['PATH'] = os.environ['XILINX_VITIS'] + '/bin:' + os.environ['PATH']"
]
},
{
@@ -188,7 +188,7 @@
" X_train_val,\n",
" y_train_val,\n",
" batch_size=1024,\n",
" epochs=30,\n",
" epochs=10,\n",
" validation_split=0.25,\n",
" shuffle=True,\n",
" callbacks=callbacks.callbacks,\n",
@@ -224,14 +224,13 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert the model to FPGA firmware with hls4ml\n",
"Now we will go through the steps to convert the model we trained to a low-latency optimized FPGA firmware with hls4ml.\n",
"First, we will evaluate its classification performance to make sure we haven't lost accuracy using the fixed-point data types. \n",
"Then we will synthesize the model with Vivado HLS and check the metrics of latency and FPGA resource usage.\n",
"Then we will synthesize the model with Vitis HLS and check the metrics of latency and FPGA resource usage.\n",
"\n",
"### Make an hls4ml config & model\n",
"The hls4ml Neural Network inference library is controlled through a configuration dictionary.\n",
@@ -246,13 +245,13 @@
"source": [
"import hls4ml\n",
"\n",
"config = hls4ml.utils.config_from_keras_model(model, granularity='model')\n",
"config = hls4ml.utils.config_from_keras_model(model, granularity='model', backend='Vitis')\n",
"print(\"-----------------------------------\")\n",
"print(\"Configuration\")\n",
"plotting.print_dict(config)\n",
"print(\"-----------------------------------\")\n",
"hls_model = hls4ml.converters.convert_from_keras_model(\n",
" model, hls_config=config, output_dir='model_1/hls4ml_prj', part='xcu250-figd2104-2L-e'\n",
" model, hls_config=config, backend='Vitis', output_dir='model_1/hls4ml_prj', part='xcu250-figd2104-2L-e'\n",
")"
]
},
@@ -327,21 +326,23 @@
"metadata": {},
"source": [
"## Synthesize\n",
"Now we'll actually use Vivado HLS to synthesize the model. We can run the build using a method of our `hls_model` object.\n",
"Now we'll actually use Vitis HLS to synthesize the model. We can run the build using a method of our `hls_model` object.\n",
"After running this step, we can integrate the generated IP into a workflow to compile for a specific FPGA board.\n",
"In this case, we'll just review the reports that Vivado HLS generates, checking the latency and resource usage.\n",
"In this case, we'll just review the reports that Vitis HLS generates, checking the latency and resource usage.\n",
"\n",
"**This can take several minutes.**\n",
"\n",
"While the C-Synthesis is running, we can monitor the progress looking at the log file by opening a terminal from the notebook home, and executing:\n",
"\n",
"`tail -f model_1/hls4ml_prj/vivado_hls.log`"
"`tail -f model_1/hls4ml_prj/vitis_hls.log`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"hls_model.build(csim=False)"
@@ -352,7 +353,7 @@
"metadata": {},
"source": [
"## Check the reports\n",
"Print out the reports generated by Vivado HLS. Pay attention to the Latency and the 'Utilization Estimates' sections"
"Print out the reports generated by Vitis HLS. Pay attention to the Latency and the 'Utilization Estimates' sections"
]
},
{
@@ -373,11 +374,18 @@
"Calculate how many multiplications are performed for the inference of this network...\n",
"(We'll discuss the outcome)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -391,7 +399,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.6"
"version": "3.10.16"
}
},
"nbformat": 4,
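For reference, two short sketches related to the notebook changes above. First, once `hls_model.build(csim=False)` finishes, the synthesis reports can be read back with hls4ml's report helper (assuming it parses Vitis HLS projects as it does Vivado HLS ones; the printed format varies by hls4ml version):

```python
import hls4ml

# Parse and print the HLS synthesis reports from the project directory
hls4ml.report.read_vivado_report('model_1/hls4ml_prj')
```

Second, for the exercise on counting multiplications, one approach is to sum over the Dense-layer kernels (a sketch, assuming `model` is the Keras model trained earlier in the notebook):

```python
import numpy as np

# Each Dense kernel of shape (n_in, n_out) contributes n_in * n_out
# multiplications per inference; biases and activations add none.
n_mults = sum(
    int(np.prod(layer.kernel.shape))
    for layer in model.layers
    if hasattr(layer, 'kernel')
)
print(f'Multiplications per inference: {n_mults}')
```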