KBA-231226181840
1. Set Up Environment
1.1. Install NVIDIA Driver and CUDA
1.2. Install Related Python Libraries
python3 -m pip install --upgrade --ignore-installed pip
python3 -m pip install --ignore-installed gdown
python3 -m pip install --ignore-installed opencv-python
python3 -m pip install --ignore-installed torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
python3 -m pip install --ignore-installed jax
python3 -m pip install --ignore-installed ftfy
python3 -m pip install --ignore-installed torchinfo
python3 -m pip install --ignore-installed https://github.com/quic/aimet/releases/download/1.25.0/AimetCommon-torch_gpu_1.25.0-cp38-cp38-linux_x86_64.whl
python3 -m pip install --ignore-installed https://github.com/quic/aimet/releases/download/1.25.0/AimetTorch-torch_gpu_1.25.0-cp38-cp38-linux_x86_64.whl
python3 -m pip install --ignore-installed numpy==1.21.6
python3 -m pip install --ignore-installed psutil
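As an optional sanity check (not part of the original steps), a short Python snippet can confirm that the GPU build of PyTorch and the AIMET wheels installed above import correctly; the file name is only a suggestion.
# check_env.py (hypothetical file name); run with: python3 check_env.py
import torch
import aimet_common   # provided by the AimetCommon wheel above
import aimet_torch    # provided by the AimetTorch wheel above
print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())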
1.3. Clone aimet-model-zoo
git clone https://github.com/quic/aimet-model-zoo.git
cd aimet-model-zoo
git checkout d09d2b0404d10f71a7640a87e9d5e5257b028802
export PYTHONPATH=${PYTHONPATH}:${PWD}
1.4. Download Set14
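Once PYTHONPATH is set, an optional one-line check (assuming aimet_zoo_torch is importable as a package at this commit) confirms that the package resolves from this checkout:
import aimet_zoo_torch
print(aimet_zoo_torch.__file__)   # should point inside this aimet-model-zoo checkout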
wget https://uofi.box.com/shared/static/igsnfieh4lz68l926l8xbklwsnnk8we9.zip
unzip igsnfieh4lz68l926l8xbklwsnnk8we9.zip
1.5. Modify Line 39 of aimet-model-zoo/aimet_zoo_torch/quicksrnet/dataloader/utils.py
Change
for img_path in glob.glob(os.path.join(test_images_dir, "*")):
to
for img_path in glob.glob(os.path.join(test_images_dir, "*_HR.*")):
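For reference, the Set14 image_SRF_* folders pair each high-resolution ground-truth image (*_HR.*) with its low-resolution counterpart and other variants, so the edited pattern keeps only the HR files. A minimal illustration, assuming that naming scheme:
import glob
import os

test_images_dir = "../Set14/image_SRF_4"   # adjust to where Set14 was unzipped
all_files = glob.glob(os.path.join(test_images_dir, "*"))
hr_files = glob.glob(os.path.join(test_images_dir, "*_HR.*"))
print(len(all_files), "files in total;", len(hr_files), "high-resolution ground-truth images")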
1.6. Run the Evaluation
# Run under YOURPATH/aimet-model-zoo
# For quicksrnet_small_2x_w8a8
python3 aimet_zoo_torch/quicksrnet/evaluators/quicksrnet_quanteval.py \
--model-config quicksrnet_small_2x_w8a8 \
--dataset-path ../Set14/image_SRF_4
# For quicksrnet_small_4x_w8a8
python3 aimet_zoo_torch/quicksrnet/evaluators/quicksrnet_quanteval.py \
--model-config quicksrnet_small_4x_w8a8 \
--dataset-path ../Set14/image_SRF_4
# For quicksrnet_medium_2x_w8a8
python3 aimet_zoo_torch/quicksrnet/evaluators/quicksrnet_quanteval.py \
--model-config quicksrnet_medium_2x_w8a8 \
--dataset-path ../Set14/image_SRF_4
# For quicksrnet_medium_4x_w8a8
python3 aimet_zoo_torch/quicksrnet/evaluators/quicksrnet_quanteval.py \
--model-config quicksrnet_medium_4x_w8a8 \
--dataset-path ../Set14/image_SRF_4
You should obtain the PSNR value for the simulated model. You can change model-config for the different QuickSRNet sizes; the available options are under aimet-model-zoo/aimet_zoo_torch/quicksrnet/model/model_cards/.
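If it helps, the available model-config names can be listed straight from the model_cards folder; this is only a convenience sketch, assuming each JSON file is named after its config:
import glob
import os

cards_dir = "aimet_zoo_torch/quicksrnet/model/model_cards"   # relative to the aimet-model-zoo checkout
for card in sorted(glob.glob(os.path.join(cards_dir, "*.json"))):
    print(os.path.splitext(os.path.basename(card))[0])       # e.g. quicksrnet_small_2x_w8a8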
2. Apply Patch
2.1. Open "Export to ONNX Steps REVISED.docx"
2.2. Skip the git commit ID
2.3. Section 1 Code
Add all of the Section 1 code below the last line (after line 366) of aimet-model-zoo/aimet_zoo_torch/quicksrnet/model/models.py
2.4. Section 2 and 3 Code
Add all of the Section 2 and 3 code below line 93 of aimet-model-zoo/aimet_zoo_torch/quicksrnet/evaluators/quicksrnet_quanteval.py
2.5. Key Parameters in the load_model Function
model = load_model(MODEL_PATH_INT8,
                   MODEL_NAME,
                   MODEL_ARGS.get(MODEL_NAME).get(MODEL_CONFIG),
                   use_quant_sim_model=True,
                   encoding_path=ENCODING_PATH,
                   quantsim_config_path=CONFIG_PATH,
                   calibration_data=IMAGES_LR,
                   use_cuda=True,
                   before_quantization=True,
                   convert_to_dcr=True)
MODEL_PATH_INT8 = aimet_zoo_torch/quicksrnet/model/weights/quicksrnet_small_2x_w8a8/pre_opt_weights
MODEL_NAME = QuickSRNetSmall
MODEL_ARGS.get(MODEL_NAME).get(MODEL_CONFIG) = {'scaling_factor': 2}
ENCODING_PATH = aimet_zoo_torch/quicksrnet/model/weights/quicksrnet_small_2x_w8a8/adaround_encodings
CONFIG_PATH = aimet_zoo_torch/quicksrnet/model/weights/quicksrnet_small_2x_w8a8/aimet_config
Please replace these variables accordingly for the other QuickSRNet sizes (see the sketch below).
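The sketch below shows one way to derive the section 2.5 values for another config; it is not part of the patch. It assumes the weights directory layout matches the quicksrnet_small_2x_w8a8 example above and infers the model name and scaling factor from the config string, so verify both against the model cards.
MODEL_CONFIG = "quicksrnet_medium_4x_w8a8"   # any config listed under model_cards/

WEIGHTS_DIR = f"aimet_zoo_torch/quicksrnet/model/weights/{MODEL_CONFIG}"
MODEL_PATH_INT8 = f"{WEIGHTS_DIR}/pre_opt_weights"
ENCODING_PATH = f"{WEIGHTS_DIR}/adaround_encodings"
CONFIG_PATH = f"{WEIGHTS_DIR}/aimet_config"

MODEL_NAME = "QuickSRNetMedium" if "medium" in MODEL_CONFIG else "QuickSRNetSmall"   # inferred from the config name; verify
SCALING_ARGS = {"scaling_factor": 4 if "_4x_" in MODEL_CONFIG else 2}                # passed as the third load_model argument

print(MODEL_PATH_INT8)
print(MODEL_NAME, SCALING_ARGS)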
2.6. Model Input Size Modification
Update the input size in all of the following places:
- "input_shape" in aimet-model-zoo/aimet_zoo_torch/quicksrnet/model/model_cards/*.json
- Funzione interna load_model(…) in aimet-model-zoo/aimet_zoo_torch/quicksrnet/model/inference.py
- Parametru in a funzione export_to_onnx(…, input_height, input_width) da "Export to ONNX Steps REVISED.docx"
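A minimal sketch of the first change, assuming the model card JSON stores the shape under a top-level "input_shape" key (check the file before editing, as the exact layout may differ):
import json

card = "aimet_zoo_torch/quicksrnet/model/model_cards/quicksrnet_small_2x_w8a8.json"
with open(card) as f:
    cfg = json.load(f)

print("current input_shape:", cfg.get("input_shape"))
cfg["input_shape"] = [1, 3, 360, 640]   # example NCHW size; use your target height and width
with open(card, "w") as f:
    json.dump(cfg, f, indent=4)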
2.7. Re-run Step 1.6 to Export the ONNX Model
3. Convert to SNPE
3.1. Convert
${SNPE_ROOT}/bin/x86_64-linux-clang/snpe-onnx-to-dlc \
--input_network model.onnx \
--quantization_overrides ./model.encodings
3.2. (Optional) Extract a Quantized-Only DLC
snpe-dlc-quant --input_dlc model.dlc --float_fallback --override_params
3.3. (IMPORTANT) The ONNX model I/O is in NCHW order; the converted DLC I/O is in NHWC order.
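When raw tensors prepared for the ONNX model are fed to the converted DLC (for example as SNPE raw inputs), the data therefore needs a layout transpose; a minimal NumPy illustration with an example input size:
import numpy as np

x_nchw = np.random.rand(1, 3, 360, 640).astype(np.float32)   # tensor prepared for the ONNX model
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))                   # layout expected by the converted DLC
print(x_nchw.shape, "->", x_nhwc.shape)                       # (1, 3, 360, 640) -> (1, 360, 640, 3)
# The DLC output is NHWC as well; transpose it with (0, 3, 1, 2) before comparing with the ONNX output.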
Documents / Resources
Qualcomm AIMET Efficiency Toolkit Documentation [pdf]: instructions for quicksrnet_small_2x_w8a8, quicksrnet_small_4x_w8a8, quicksrnet_medium_2x_w8a8, quicksrnet_medium_4x_w8a8




