docs/source/docs/objectDetection/opi.md (2 changes: 1 addition & 1 deletion)
@@ -14,6 +14,6 @@ PhotonVision currently ONLY supports 640x640 Ultralytics YOLOv5, YOLOv8, and YOL
Only quantized models are supported, so take care when exporting to select the option for quantization.
:::

PhotonVision now ships with a {{ '[Python Notebook](https://github.com/PhotonVision/photonvision/blob/{}/scripts/rknn_conversion.ipynb)'.format(git_tag_ref) }} that you can use in [Google Colab](https://colab.research.google.com) or in a local environment. In Google Colab, you can simply paste the PhotonVision GitHub URL into the "GitHub" tab and select the `rknn_conversion.ipynb` notebook without needing to manually download anything.
PhotonVision now ships with a {{ '[Python Notebook](https://github.com/PhotonVision/photonvision/blob/{}/scripts/rknn_conversion.ipynb)'.format(git_tag_ref) }} that you can use in [Google Colab](https://colab.research.google.com) or in a local **Linux** environment (since `rknn-toolkit2` only supports Linux). In Google Colab, you can simply paste the PhotonVision GitHub URL into the "GitHub" tab and select the `rknn_conversion.ipynb` notebook without needing to manually download anything.

Please ensure that the model you are attempting to convert is among the {ref}`supported models <docs/objectDetection/opi:Supported Models>` and using the PyTorch format.
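For local runs, a quick pre-flight check can save a failed install. The sketch below is illustrative only and is not part of this diff: it assumes nothing beyond the Python standard library and simply enforces the Linux-only constraint that the `rknn-toolkit2` note above describes.

```python
# Illustrative pre-flight check for running rknn_conversion.ipynb locally.
# The Linux-only constraint comes from rknn-toolkit2; the function name and
# messages here are just an example, not PhotonVision code.
import platform
import sys


def check_local_environment() -> None:
    if platform.system() != "Linux":
        raise RuntimeError(
            "rknn-toolkit2 only publishes Linux builds; use Google Colab "
            "or a Linux machine/VM for the conversion notebook."
        )
    print(f"OK: Linux ({platform.machine()}), Python {sys.version.split()[0]}")


if __name__ == "__main__":
    check_local_environment()
```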
docs/source/docs/objectDetection/rubik.md (2 changes: 1 addition & 1 deletion)
@@ -14,7 +14,7 @@ PhotonVision currently ONLY supports 640x640 Ultralytics YOLOv8 and YOLOv11 mode
Only quantized models are supported, so take care when exporting to select the option for quantization.
:::

PhotonVision now ships with a {{ '[Python Notebook](https://github.com/PhotonVision/photonvision/blob/{}/scripts/rubik_conversion.ipynb)'.format(git_tag_ref) }} that you can use in [Google Colab](https://colab.research.google.com) or in a local environment. In Google Colab, you can simply paste the PhotonVision GitHub URL into the "GitHub" tab and select the `rubik_conversion.ipynb` notebook without needing to manually download anything.
PhotonVision now ships with a {{ '[Python Notebook](https://github.com/PhotonVision/photonvision/blob/{}/scripts/rubik_conversion.ipynb)'.format(git_tag_ref) }} that you can use in [Google Colab](https://colab.research.google.com), [Kaggle](https://kaggle.com/code), or in a local environment. In Google Colab, you can simply paste the PhotonVision GitHub URL into the "GitHub" tab and select the `rubik_conversion.ipynb` notebook without needing to manually download anything.

Please ensure that the model you are attempting to convert is among the {ref}`supported models <docs/objectDetection/rubik:Supported Models>` and using the PyTorch format.
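Before uploading weights for conversion, it can help to confirm the checkpoint actually loads as an Ultralytics model. The snippet below is a sketch, not part of this change: it assumes the `ultralytics` package is installed locally and uses a placeholder path.

```python
# Sketch: sanity-check a .pt checkpoint before running the conversion notebook.
# Assumes the "ultralytics" package is installed; "best.pt" is a placeholder.
from pathlib import Path

from ultralytics import YOLO

weights = Path("best.pt")  # replace with the exported YOLOv8/YOLOv11 weights
assert weights.suffix == ".pt", "Conversion expects a PyTorch (.pt) checkpoint"

model = YOLO(str(weights))         # raises if the file is not a YOLO checkpoint
print(type(model.model).__name__)  # e.g. DetectionModel for detection weights
```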

scripts/rknn_conversion.ipynb (8 changes: 6 additions & 2 deletions)
@@ -10,7 +10,11 @@
"\n",
"### Before you start\n",
"\n",
"If you are not using Google Colab, it is recommended to create a separate [Python virtual environment](https://docs.python.org/3/library/venv.html) before you run this project. This ensures that packages installed for the conversion process do not conflict with other packages you may already have set up."
"If you are not using Google Colab, it is recommended to create a separate [Python virtual environment](https://docs.python.org/3/library/venv.html) before you run this project. This ensures that packages installed for the conversion process do not conflict with other packages you may already have set up.\n",
"\n",
"## ⚠️ Linux Only\n",
"This notebook can only by run on **Linux** because the `rknn-toolkit2` Python package only supports Linux builds.\n",
"If you don’t have access to a Linux system, consider using a cloud service like [Google Colab](https://colab.research.google.com).\n"
],
"id": "65e9f457d12dcc6b"
},
@@ -194,7 +198,7 @@
" check_git_installed()\n",
"\n",
" if not version in valid_yolo_versions:\n",
" print(f\"YOLO version \\\"{version}\\\" is not a valid version! Valid versions are: {\", \".join(valid_yolo_versions)}\")\n",
" print(f\"YOLO version \\\"{version}\\\" is not a valid version! Valid versions are: {', '.join(valid_yolo_versions)}\")\n",
"\n",
" try:\n",
" if version.lower() == \"yolov5\":\n",
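For context on the quoting change above: before Python 3.12 (PEP 701), an f-string expression cannot reuse the f-string's own quote character, so the `", ".join(...)` form is a `SyntaxError` on older interpreters, while `', '.join(...)` parses everywhere. A standalone illustration follows; the list value is hypothetical, not the notebook's actual definition.

```python
# Hypothetical value; the real list lives elsewhere in the notebook.
valid_yolo_versions = ["yolov5", "yolov8", "yolov11"]
version = "yolov7"

# Old form: {", ".join(...)} reuses the f-string's double quotes and is a
# SyntaxError before Python 3.12. Single quotes inside the expression, as in
# the fixed line, parse on every supported Python version.
print(f"YOLO version \"{version}\" is not a valid version! "
      f"Valid versions are: {', '.join(valid_yolo_versions)}")

# An equally portable alternative: join first, then interpolate.
joined = ", ".join(valid_yolo_versions)
print(f"Valid versions are: {joined}")
```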
scripts/rubik_conversion.ipynb (156 changes: 79 additions & 77 deletions)
@@ -1,82 +1,84 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "1tMAqVl4p58r"
},
"source": [
"## YOLO to Rubik TFlite Conversion"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nAbygyUYp58s"
},
"source": [
"#### Requirements\n",
"\n",
"This notebook can be run on Colab. However, Colab has some incompatibility issues that result in needing to restart the notebook in the middle of the run. This is normal, and after restarting you should rerun the below cell.\n",
"\n",
"Prior to running the notebook, it is necessary to make an account on [Qualcomm's AI Hub](https://app.aihub.qualcomm.com/account/), and obtain your API token. Then, replace <YOUR_API_TOKEN> with your API token in the cell below.\n",
"\n",
"Documentation for the Qualcomm AI Hub can be found [here](https://app.aihub.qualcomm.com/docs/index.html).\n",
"\n",
"You should also have a PyTorch model (ending in `.pt`) that's been uploaded to the runtime that you intend to convert. After uploading, copy it's absolute path by right-clicking on the file, and replace /PATH/TO/WEIGHTS.\n",
"\n",
"**NOTE: your API key will be listed in the output, and should therefore be redacted if the output is shared.**\n",
"\n",
"Once the run has finished, open the AI Hub link, and download the tflite model for the job you just ran.\n",
"\n",
"If you want to use this notebook to convert a yolo11 model, you'll need to replace all instances of `yolov8` in the cell below with `yolov11`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "aX3JcSFKp58s",
"outputId": "f2cdadd2-c448-4d8c-c681-c19decef7f3e"
},
"outputs": [],
"source": [
"# This installs Python package\n",
"!pip install qai-hub-models[yolov8_det]\n",
"# sets up AI Hub enviroment\n",
"!qai-hub configure --api_token <YOUR_API_TOKEN>\n",
"# Converts the model to be ran on RB3Gen2\n",
"!yes | python -m qai_hub_models.models.yolov8_det.export --quantize w8a8 --device=\"RB3 Gen 2 (Proxy)\" --ckpt-name /PATH/TO/WEIGHTS --device-os linux --target-runtime tflite --output-dir .\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0I2cXQO4p58s"
},
"source": [
"Modified from https://github.com/ramalamadingdong/yolo-rb3gen2-trainer/blob/main/AI_Hub_Quanitization_RB3Gen2.ipynb"
]
}
],
"metadata": {
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "1tMAqVl4p58r"
},
"source": [
"## YOLO to Rubik TFlite Conversion"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nAbygyUYp58s"
},
"source": [
"#### Requirements\n",
"\n",
"This notebook can be run on Colab. However, Colab has some incompatibility issues that result in needing to restart the notebook in the middle of the run. This is normal, and after restarting you should rerun the below cell.\n",
"\n",
"If you aren't using Google Colab, we recommend creating a [Python venv](https://docs.python.org/3/library/venv.html) so that the packages installed for conversion do not conflict with your existing setup.\n",
"\n",
"Prior to running the notebook, it is necessary to make an account on [Qualcomm's AI Hub](https://app.aihub.qualcomm.com/account/), and obtain your API token. Then, replace <YOUR_API_TOKEN> with your API token in the cell below.\n",
"\n",
"Documentation for the Qualcomm AI Hub can be found [here](https://app.aihub.qualcomm.com/docs/index.html).\n",
"\n",
"You should also have a PyTorch model (ending in `.pt`) that's been uploaded to the runtime that you intend to convert. After uploading, copy it's absolute path by right-clicking on the file, and replace /PATH/TO/WEIGHTS.\n",
"\n",
"**NOTE: your API key will be listed in the output, and should therefore be redacted if the output is shared.**\n",
"\n",
"Once the run has finished, open the AI Hub link, and download the tflite model for the job you just ran.\n",
"\n",
"If you want to use this notebook to convert a yolo11 model, you'll need to replace all instances of `yolov8` in the cell below with `yolov11`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
"base_uri": "https://localhost:8080/",
"height": 1000
},
"language_info": {
"name": "python",
"version": "3.11.7"
}
"id": "aX3JcSFKp58s",
"outputId": "f2cdadd2-c448-4d8c-c681-c19decef7f3e"
},
"outputs": [],
"source": [
"# This installs Python package\n",
"!pip install qai-hub-models[yolov8_det]\n",
"# sets up AI Hub enviroment\n",
"!qai-hub configure --api_token <YOUR_API_TOKEN>\n",
"# Converts the model to be ran on RB3Gen2\n",
"!yes | python -m qai_hub_models.models.yolov8_det.export --quantize w8a8 --device=\"RB3 Gen 2 (Proxy)\" --ckpt-name /PATH/TO/WEIGHTS --device-os linux --target-runtime tflite --output-dir .\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0I2cXQO4p58s"
},
"source": [
"Modified from https://github.com/ramalamadingdong/yolo-rb3gen2-trainer/blob/main/AI_Hub_Quanitization_RB3Gen2.ipynb"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"nbformat": 4,
"nbformat_minor": 0
"language_info": {
"name": "python",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
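One practical note on the token handling in the converted notebook's code cell: pasting the key inline means it lives in the notebook source as well as in the command output. The sketch below shows one alternative; the environment-variable name is arbitrary, and the tool's own output may still echo the token, so the redaction advice in the notebook still applies.

```python
# Sketch: keep the AI Hub token out of the notebook source by reading it from
# an environment variable (or a Colab secret) instead of pasting it inline.
# QAI_HUB_API_TOKEN is an arbitrary name chosen for this example.
import os
import subprocess

token = os.environ.get("QAI_HUB_API_TOKEN")
if not token:
    raise RuntimeError("Set QAI_HUB_API_TOKEN before running the conversion cell.")

# Equivalent to the `!qai-hub configure --api_token <YOUR_API_TOKEN>` line above.
subprocess.run(["qai-hub", "configure", "--api_token", token], check=True)
```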