topgear · November 13, 2025 · Shanghai

Orion O6 ("星睿O6") AI PC Developer Kit Review: Comparing CPU and NPU Across Models

0 - Overview

This article documents the steps, and the problems I ran into, while running models from the official cix_ai_model_hub on the Orion O6 platform, covering:

  • Debian 12 installation;
  • problems encountered with Anaconda;
  • problems encountered with ai_model_hub;
  • the libnoe Python binding issue;

1 - Setting Up the Test Environment

System Installation

Radxa officially provides two system builds. One is a pre-installed system image that runs after simply copying it to a USB boot drive or SSD. I chose the other: an installer that boots from a USB stick and installs the system onto the SSD step by step. The ISO download URL:

https://dl.radxa.com/orion/o6/images/debian/orion-o6-debian12-desktop-arm64-b6.iso.gz
A small detour here: when I later followed the official docs to deploy CIX-format models, they depend on the NOE Python bindings, which were not pre-installed on my system, so I spent quite a while sorting that out.

Python Runtime

Creating a Virtual Environment with Conda

Since the official reference docs generally recommend Anaconda for virtual environments, I downloaded a fairly recent release, Anaconda3-2025.06-1-Linux-aarch64, which turned out to be a poor choice. In environments created with this Anaconda, importing opencv-python always fails; creating environments with different Python versions (3.8/3.10/3.11) and installing different opencv-python versions did not help. The error:

(cix-model-hub-25q1) topgear@radxa-orion-o6:~$ python
Python 3.11.14 (main, Oct 21 2025, 18:24:34) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
libGLX: CIX driver check failed.
Aborted
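Note that the `Aborted` line above is the whole interpreter dying (SIGABRT), not a Python exception, so a `try/except` around `import cv2` never gets a chance to run. A small sketch for probing the import in a child process instead; the helper name is mine, not part of any official tooling:

```python
# Probe "import <module>" in a child process, so a hard abort (e.g.
# the libGLX crash above) kills the child rather than our interpreter.
import subprocess
import sys

def import_aborts(module: str) -> bool:
    """Return True if importing `module` crashes the interpreter."""
    proc = subprocess.run(
        [sys.executable, "-c", f"import {module}"],
        capture_output=True,
    )
    # A negative returncode means the child died from a signal
    # (-6 for SIGABRT); a plain ImportError exits with code 1.
    return proc.returncode not in (0, 1)
```

This makes it easy to script checks across several environments without each crash taking down the checking script itself.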

Creating a Virtual Environment with venv

After running into trouble with Anaconda, I switched to venv:

# Install the venv package
sudo apt install python3-venv
# Create a virtual environment named 'venv'
python3 -m venv venv
# Activate the virtual environment 'venv'
source venv/bin/activate

Install the packages the Python inference scripts depend on:

numpy
scikit-image
scikit-learn
ultralytics
pillow
ffmpeg-python
torch
transformers
imageio
onnxruntime
pycocotools
clip
shapely
pyclipper
SentencePiece
opencv-python
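Before running any scripts, it is worth sanity-checking that the venv really has everything from the list above. A minimal sketch, assuming the pip distribution names match the list (the `clip` entry in particular may resolve to a different PyPI project than the scripts expect):

```python
# Report which pip packages from a dependency list are not installed
# in the current environment, using pip's own metadata.
from importlib.metadata import version, PackageNotFoundError

def missing(packages):
    """Return the subset of `packages` that pip has no metadata for."""
    absent = []
    for pkg in packages:
        try:
            version(pkg)
        except PackageNotFoundError:
            absent.append(pkg)
    return absent

# Example: missing(["numpy", "no-such-package"]) lists only the absent ones.
```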

Fixing the libnoe Dependency

For some reason the NOE Python binding package was missing from my installed system. Its role in the stack looks roughly like this:

python script
    |
libnoe python bindings
    |
libnoe umd
    |
npu driver

As a result, running the ai_model_hub scripts failed with a missing libnoe module. The pre-installed cix-noe-umd-1.0.0+2503.radxa package only contains libnoe.so; the corresponding Python scripts were nowhere to be found.
Later I obtained cix_noe_sdk_2025_q3_release through CIX's "early bird" program and compared the two:
image.png

I still have no idea how I managed to "lose" the libnoe Python scripts, but in the end installing libnoe-2.0.0-py3-none-manylinux2014_aarch64.whl fixed the missing libnoe module.
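With the wheel installed, a quick way to check both layers of the diagram above is to probe for the shared library and the Python binding separately. The names here follow the packages mentioned in the text (the module name `libnoe` is taken from the wheel filename) and may differ between SDK releases:

```python
# Probe the two layers from the stack diagram: the UMD shared library
# and the Python binding, without actually loading either.
import ctypes.util
import importlib.util

def noe_status():
    """Return which parts of the NOE stack are visible to Python."""
    return {
        "libnoe.so (UMD)": ctypes.util.find_library("noe") is not None,
        "python binding": importlib.util.find_spec("libnoe") is not None,
    }
```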

image.png

2 - Downloading Models

Besides ONNX models, CIX also provides models already converted, optimized, and ready to run on the NPU in its own CIX format. If you want to convert models yourself, the official tools and the required conversion configs are available too. In practice, however, I found that models can be obtained from several places, and different versions depend on different underlying SDK versions, without any clear documentation covering this. The current download channels:
image.png

The repositories also differ in what they contain:

  • github: cixtech/ai_model_hub

    • model-conversion configuration files
    • inference scripts for the different backends
    • the utils script library the inference scripts depend on
  • modelscope: cix/ai_model_hub_24_Q4

  • modelscope: cix/ai_model_hub_25_Q1_2

  • modelscope: cix/ai_model_hub_25_Q3

    • several AI demo .deb packages
    • training data and test files
    • ONNX-format models
    • CIX-format models
    • inference scripts for the different backends
    • the utils script library the inference scripts depend on
Although ai_model_hub_25_Q3 is the most complete, it is incompatible with my installed system's kernel version, so, taking the lazy route, I paired the best-matching ai_model_hub_25_Q1_2 with the script library from GitHub.

Downloading ai_model_hub_25_Q1_2

# Install ModelScope
pip install modelscope

# Download the full model repo; by default it is cached under ~/.cache/modelscope/
modelscope download --model cix/ai_model_hub_25_Q1_2
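The cache location can be overridden with the MODELSCOPE_CACHE environment variable. A small helper to resolve where the download will land; the exact directory layout below this path is version-dependent, so treat it as an assumption:

```python
# Resolve modelscope's download cache directory: MODELSCOPE_CACHE if
# set, otherwise the ~/.cache/modelscope default mentioned above.
import os
from pathlib import Path

def modelscope_cache() -> Path:
    default = Path.home() / ".cache" / "modelscope"
    return Path(os.environ.get("MODELSCOPE_CACHE", default))
```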

Downloading ai_model_hub

git clone https://github.com/cixtech/ai_model_hub.git

3 - Testing the Models

inference_onnx.py runs ONNX models on the CPU, while inference_npu.py runs CIX models on the NPU. After getting familiar with these scripts, I decided to have an AI write a web demo using two models, centerface and yolov8_l, to compare CPU and NPU speed.
The demo accepts an image upload and displays face- or object-detection results, served with a Python web framework (Flask):

  • build the web service (accept image uploads, return detection results);
  • the frontend lets you pick the model and the hardware backend;
  • the backend loads ONNX/CIX models and runs face or object detection;
  • the backend returns the processed image to the frontend for display;
  • the frontend can zoom into the processed image;
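The CPU/NPU comparison boils down to timestamping a single inference call. The begin/end pattern used throughout app.py below can be factored into a tiny helper (my own convenience wrapper, not part of the hub's utils):

```python
# Wrap one inference call with the same begin/end timestamping used in
# app.py, returning the result plus the elapsed datetime.timedelta.
import datetime

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed)."""
    begin = datetime.datetime.now()
    result = fn(*args, **kwargs)
    end = datetime.datetime.now()
    return result, end - begin
```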

Demo Directory Layout

demo/
├── app.py               # main service code
├── static/
│   ├── css/
│   │   └── style.css    # stylesheet
│   └── js/
│       └── script.js    # frontend logic
├── templates/
│   └── index.html       # frontend page
├── models/
│   ├── centerface/
│   │   ├── onnx/centerface.onnx
│   │   └── npu/centerface.cix
│   └── yolov8l/
│       ├── onnx/yolov8l.onnx
│       └── npu/yolov8l.cix
├── uploads/             # uploaded images
├── results/             # processing results
└── ...

Backend (app.py)

import os
import sys
import time
# Define the absolute path to the utils package by going up four directory levels from the current file location
_abs_path = os.path.join(os.path.dirname(__file__), "../../../../")
# Append the utils package path to the system path, making it accessible for imports
sys.path.append(_abs_path)
import cv2
import numpy as np
import onnxruntime as ort
from flask import Flask, request, render_template, send_from_directory, jsonify
import datetime


from utils.image_process import preprocess_image_centerface
from utils.face_postprocess import postprocess
from utils.image_process import preprocess_object_detect_method1
from utils.object_detect_postprocess import postprocess_yolo, xywh2xyxy
from utils.draw import draw_coco as draw
from utils.NOE_Engine import EngineInfer


app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = 'uploads'
app.config['RESULT_FOLDER'] = 'results'
app.config['MODEL_FOLDER'] = 'models'  # model root directory
app.config['MAX_CONTENT_LENGTH'] = 10 * 1024 * 1024  # 10 MB upload limit
os.makedirs(app.config['UPLOAD_FOLDER'], exist_ok=True)
os.makedirs(app.config['RESULT_FOLDER'], exist_ok=True)


# Model cache (avoids reloading the same model)
model_cache = {}

# Load a model for the given model name and backend
def load_model(model_name, backend):
    cache_key = f"{model_name}_{backend}"
    if cache_key in model_cache:
        return model_cache[cache_key]

    # Build the model file path
    model_path = os.path.join(
        app.config['MODEL_FOLDER'],
        model_name,
        backend,
        f"{model_name}.onnx" if backend == 'onnx' else f"{model_name}.cix"
    )
    if not os.path.exists(model_path):
        raise FileNotFoundError(f"Model file not found: {model_path}")

    # Load with the selected backend
    if backend == 'onnx':
        session = ort.InferenceSession(model_path)
        model_cache[cache_key] = session
    elif backend == 'npu':
        # Get model info
        model = EngineInfer(model_path)
        model_cache[cache_key] = model
    else:
        raise ValueError(f"Unsupported backend: {backend}")

    return model_cache[cache_key]


# Run detection on one image with the chosen model and backend
def process_image(image_path, model_name, backend):
    # Load the (possibly cached) model
    model = load_model(model_name, backend)

    if model_name == 'centerface':
        if backend == 'onnx':
            frame = cv2.imread(image_path)
            h, w = frame.shape[:2]
            # Get model info
            output_names = [x.name for x in model.get_outputs()]
            input_names = [x.name for x in model.get_inputs()]
            det_scale=0.0

            img_h_new, img_w_new = int(np.ceil(h / 32) * 32), int(np.ceil(w / 32) * 32)
            scale_h, scale_w = img_h_new / h, img_w_new / w
            input,det_scale=preprocess_image_centerface(frame)

            begin = datetime.datetime.now()

            heatmap, scale, offset, lms=model.run(output_names, {input_names[0]: input})

            end = datetime.datetime.now()
            print("cpu times = ", end - begin)

            dets, lms = postprocess(heatmap, lms, offset, scale, 0.35, img_h_new, img_w_new, scale_h, scale_w,det_scale)

            for det in dets:
                boxes, score = det[:4], det[4]
                cv2.rectangle(frame, (int(boxes[0]), int(boxes[1])), (int(boxes[2]), int(boxes[3])), (2, 255, 0), 1)

            for lm in lms:
                for i in range(0, 5):
                    cv2.circle(frame, (int(lm[i * 2]), int(lm[i * 2 + 1])), 2, (0, 0, 255), -1)

            output_dir = os.path.join(app.config['RESULT_FOLDER'])
            os.makedirs(output_dir, exist_ok=True)
            out_image_path = os.path.join(
                output_dir, "centerface_onnx_out_" + os.path.basename(image_path)
                )

            cv2.imwrite(out_image_path, frame)

            return out_image_path, end - begin
        else:
            frame = cv2.imread(image_path)
            h, w = frame.shape[:2]
            det_scale=0.0
            img_h_new, img_w_new = int(np.ceil(h / 32) * 32), int(np.ceil(w / 32) * 32)
            scale_h, scale_w = img_h_new / h, img_w_new / w

            input,det_scale=preprocess_image_centerface(frame)
            input=[input]

            begin = datetime.datetime.now()
            heatmap, scale, offset, lms=model.forward(input)
            end = datetime.datetime.now()
            print("npu times = ", end - begin)

            heatmap =np.reshape(heatmap,(1,1,160,160))
            scale =np.reshape(scale,(1,2,160,160))
            offset =np.reshape(offset,(1,2,160,160))
            lms =np.reshape(lms,(1,10,160,160))

            dets, lms = postprocess(heatmap, lms, offset, scale, 0.2, img_h_new, img_w_new, scale_h, scale_w,det_scale)

            for det in dets:
                boxes, score = det[:4], det[4]
                cv2.rectangle(frame, (int(boxes[0]), int(boxes[1])), (int(boxes[2]), int(boxes[3])), (2, 255, 0), 1)

            for lm in lms:
                for i in range(0, 5):
                    cv2.circle(frame, (int(lm[i * 2]), int(lm[i * 2 + 1])), 2, (0, 0, 255), -1)

            output_dir = os.path.join(app.config['RESULT_FOLDER'])
            os.makedirs(output_dir, exist_ok=True)
            out_image_path = os.path.join(
                output_dir, "centerface_npu_out_" + os.path.basename(image_path)
                )

            cv2.imwrite(out_image_path, frame)

            return out_image_path, end - begin
    elif model_name == 'yolov8l':
        if backend == 'onnx':
            # Get list of images from the specified path
            input_name = model.get_inputs()[0].name
            label_name = model.get_outputs()[0].name
            os.makedirs(app.config['RESULT_FOLDER'], exist_ok=True)

            # Preprocess the image for inference
            src_shape, new_shape, show_image, data = preprocess_object_detect_method1(
                image_path, target_size=(640, 640), mode="BGR"
            )

            begin = datetime.datetime.now()
            # Run inference and get predictions
            pred = model.run([label_name], {input_name: data.astype(np.float32)})[0]
            end = datetime.datetime.now()
            print("cpu times = ", end - begin)
            pred = np.squeeze(pred)
            pred = np.transpose(pred, (1, 0))

            # bboxes, conf, class_id
            results = postprocess_yolo(pred, 0.3, 0.45)
            if len(results) == 0:
                output_path = os.path.join(app.config['RESULT_FOLDER'], os.path.basename(image_path))
                # No detections: save the unannotated image and return early
                cv2.imwrite(output_path, show_image)
                return output_path, end - begin

            bbox_xywh = results[:, :4]
            bbox_xyxy = xywh2xyxy(bbox_xywh)
            x_scale = src_shape[1] / new_shape[1]
            y_scale = src_shape[0] / new_shape[0]
            bbox_xyxy *= (x_scale, y_scale, x_scale, y_scale)


            ret_img = draw(show_image, bbox_xyxy, results[:, 5], results[:, 4])
            output_path = os.path.join(app.config['RESULT_FOLDER'], "yolov8l_onnx_out_" +  os.path.basename(image_path))

            # Save the resulting image to the output directory
            cv2.imwrite(output_path, ret_img)

            return output_path, end - begin
        else:
            os.makedirs(app.config['RESULT_FOLDER'], exist_ok=True)
            # Get list of images from the specified path

            # Preprocess the image for inference
            src_shape, new_shape, show_image, data = preprocess_object_detect_method1(
                image_path, target_size=(640, 640), mode="BGR"
            )

            begin = datetime.datetime.now()
            # Run inference and get predictions
            pred = model.forward(data.astype(np.float32))[0]
            end = datetime.datetime.now()
            print("npu times = ", end - begin)
            pred = np.reshape(pred, (84, 8400))
            pred = np.transpose(pred, (1, 0))

            # bboxes, conf, class_id
            results = postprocess_yolo(pred, 0.3, 0.45)
            if len(results) == 0:
                output_path = os.path.join(app.config['RESULT_FOLDER'], os.path.basename(image_path))
                # No detections: save the unannotated image and return early
                cv2.imwrite(output_path, show_image)
                return output_path, end - begin

            bbox_xywh = results[:, :4]
            bbox_xyxy = xywh2xyxy(bbox_xywh)
            x_scale = src_shape[1] / new_shape[1]
            y_scale = src_shape[0] / new_shape[0]
            bbox_xyxy *= (x_scale, y_scale, x_scale, y_scale)

            ret_img = draw(show_image, bbox_xyxy, results[:, 5], results[:, 4])
            output_path = os.path.join(app.config['RESULT_FOLDER'], "yolov8l_npu_out_" + os.path.basename(image_path))
            # Save the resulting image to the output directory
            cv2.imwrite(output_path, ret_img)

            return output_path, end - begin



# Routes
@app.route('/')
def index():
    return render_template('index.html')


@app.route('/upload', methods=['POST'])
def upload():
    if 'image' not in request.files:
        return jsonify({'status': 'error', 'msg': 'No image selected'})
    file = request.files['image']
    if file.filename == '':
        return jsonify({'status': 'error', 'msg': 'Empty filename'})
    if file and file.filename.lower().endswith(('.png', '.jpg', '.jpeg', '.bmp')):
        filename = f"{int(time.time())}_{file.filename}"  # avoid filename collisions
        upload_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
        file.save(upload_path)
        return jsonify({
            'status': 'success',
            'filename': filename,
            'upload_path': f'/uploads/{filename}'
        })
    return jsonify({'status': 'error', 'msg': 'Unsupported file format'})


@app.route('/process/<filename>', methods=['POST'])
def process(filename):
    data = request.json
    model_name = data.get('model', 'centerface')
    backend = data.get('backend', 'onnx')

    upload_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
    if not os.path.exists(upload_path):
        return jsonify({'status': 'error', 'msg': 'File not found'})

    try:
        result_path, process_time = process_image(upload_path, model_name, backend)
        return jsonify({
            'status': 'success',
            'result_path': f'/results/{os.path.basename(result_path)}',
            'process_time': f'{process_time}',
            'model': model_name,
            'backend': backend
        })
    except Exception as e:
        return jsonify({'status': 'error', 'msg': str(e)})


@app.route('/uploads/<filename>')
def serve_upload(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)


@app.route('/results/<filename>')
def serve_result(filename):
    return send_from_directory(app.config['RESULT_FOLDER'], filename)


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, threaded=True)  # threaded=True allows concurrent requests
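Both CenterFace branches above first round the input dimensions up to a multiple of 32 before preprocessing. Isolated from the Flask plumbing, that arithmetic is just:

```python
# Round (h, w) up to the next multiple of 32, as the CenterFace
# branches in app.py do when computing img_h_new / img_w_new.
import math

def pad_to_32(h: int, w: int) -> tuple:
    return math.ceil(h / 32) * 32, math.ceil(w / 32) * 32

# pad_to_32(480, 640) -> (480, 640); pad_to_32(481, 641) -> (512, 672)
```

The scale_h/scale_w factors in app.py are then simply the padded sizes divided by the originals, which the postprocess step uses to map detections back onto the source image.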

Frontend Page (templates/index.html)

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Model Comparison Test</title>
    <link rel="stylesheet" href="/static/css/style.css">
</head>
<body>
    <div class="container">
        <h1>Model Comparison Test</h1>

        <!-- Configuration area -->
        <div class="config-area">
            <div class="config-item">
                <label for="model-select">Model:</label>
                <select id="model-select">
                    <option value="centerface">CenterFace</option>
                    <option value="yolov8l">Yolov8l</option>
                </select>
            </div>
            <div class="config-item">
                <label for="backend-select">Backend:</label>
                <select id="backend-select">
                    <option value="onnx">ONNX (CPU/GPU)</option>
                    <option value="npu">NPU (accelerated)</option>
                </select>
            </div>
        </div>

        <!-- Upload area -->
        <div class="upload-area">
            <label for="file-upload" class="upload-label">
                Click or drag images here to upload
                <input type="file" id="file-upload" accept="image/*" multiple>
            </label>
            <div class="progress-container" id="upload-progress">
                <div class="progress-bar" id="upload-bar"></div>
                <span class="progress-text" id="upload-text">Waiting for upload...</span>
            </div>
        </div>

        <!-- Processing status area -->
        <div class="status-container" id="status-container" style="display: none;">
            <div class="spinner"></div>
            <p id="process-status">Processing image...</p>
            <div class="progress-container" id="fetch-progress">
                <div class="progress-bar" id="fetch-bar"></div>
                <span class="progress-text" id="fetch-text">Waiting for result...</span>
            </div>
        </div>

        <!-- Results area -->
        <div class="results-container" id="results-container">
            <h2>Results</h2>
            <div class="results-grid" id="results-grid"></div>
        </div>

        <!-- Image zoom modal -->
        <div id="image-modal" class="modal">
            <span class="close-btn">&times;</span>
            <img class="modal-content" id="modal-image">
            <div id="modal-caption"></div>
        </div>
    </div>

    <script src="/static/js/script.js"></script>
</body>
</html>

Stylesheet (static/css/style.css)

/* Configuration area styles */
.config-area {
    display: flex;
    gap: 20px;
    margin: 20px 0;
    flex-wrap: wrap;
}

.config-item {
    display: flex;
    align-items: center;
    gap: 8px;
}

.config-item select {
    padding: 6px 10px;
    border-radius: 4px;
    border: 1px solid #ddd;
    font-size: 14px;
}

/* Image zoom modal styles */
.modal {
    display: none; /* hidden by default */
    position: fixed;
    z-index: 1000;
    left: 0;
    top: 0;
    width: 100%;
    height: 100%;
    background-color: rgba(0,0,0,0.9);
}

.modal-content {
    margin: auto;
    display: block;
    max-width: 90%;
    max-height: 90%;
    animation: zoom 0.3s;
}

@keyframes zoom {
    from {transform: scale(0)}
    to {transform: scale(1)}
}

.close-btn {
    position: absolute;
    top: 20px;
    right: 30px;
    color: white;
    font-size: 40px;
    font-weight: bold;
    cursor: pointer;
}

#modal-caption {
    margin: auto;
    display: block;
    width: 80%;
    max-width: 700px;
    text-align: center;
    color: #ccc;
    padding: 10px 0;
}

/* Base styles */
* {
    box-sizing: border-box;
    margin: 0;
    padding: 0;
    font-family: 'Arial', sans-serif;
}

.container {
    max-width: 1200px;
    margin: 0 auto;
    padding: 20px;
}

h1, h2 {
    color: #333;
    margin: 20px 0;
    text-align: center;
}

.upload-area {
    border: 2px dashed #4CAF50;
    border-radius: 10px;
    padding: 40px 20px;
    text-align: center;
    margin: 20px 0;
    transition: all 0.3s;
}

.upload-area:hover {
    background-color: #f9f9f9;
}

.upload-label {
    cursor: pointer;
    color: #4CAF50;
    font-size: 18px;
    font-weight: bold;
}

#file-upload {
    display: none;
}

.progress-container {
    height: 20px;
    background-color: #f1f1f1;
    border-radius: 10px;
    margin: 15px 0;
    overflow: hidden;
    display: none;
    position: relative;
}

.progress-bar {
    height: 100%;
    background-color: #4CAF50;
    width: 0%;
    transition: width 0.3s ease;
}

.progress-text {
    position: absolute;
    left: 50%;
    transform: translateX(-50%);
    margin-top: -20px;
    font-size: 14px;
    color: #666;
}

.status-container {
    text-align: center;
    margin: 20px 0;
}

.spinner {
    width: 40px;
    height: 40px;
    margin: 0 auto;
    border: 4px solid #f3f3f3;
    border-top: 4px solid #4CAF50;
    border-radius: 50%;
    animation: spin 1s linear infinite;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

.results-container {
    margin: 30px 0;
}

.results-grid {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
    gap: 20px;
    margin-top: 20px;
}

.result-card {
    border: 1px solid #ddd;
    border-radius: 8px;
    padding: 15px;
    box-shadow: 0 2px 5px rgba(0,0,0,0.1);
    transition: transform 0.2s;
}

.result-card:hover {
    transform: scale(1.02);
}

.result-image {
    width: 100%;
    max-height: 300px;
    object-fit: contain;
    border-radius: 4px;
    margin-bottom: 10px;
    cursor: zoom-in;
}

.result-info {
    font-size: 14px;
    color: #666;
}

Frontend Logic (static/js/script.js)

document.addEventListener('DOMContentLoaded', () => {
    // DOM elements
    const fileInput = document.getElementById('file-upload');
    const uploadArea = document.querySelector('.upload-area');
    const uploadProgress = document.getElementById('upload-progress');
    const uploadBar = document.getElementById('upload-bar');
    const uploadText = document.getElementById('upload-text');
    const statusContainer = document.getElementById('status-container');
    const processStatus = document.getElementById('process-status');
    const fetchProgress = document.getElementById('fetch-progress');
    const fetchBar = document.getElementById('fetch-bar');
    const fetchText = document.getElementById('fetch-text');
    const resultsGrid = document.getElementById('results-grid');
    const modelSelect = document.getElementById('model-select');
    const backendSelect = document.getElementById('backend-select');
    const imageModal = document.getElementById('image-modal');
    const modalImage = document.getElementById('modal-image');
    const modalCaption = document.getElementById('modal-caption');
    const closeBtn = document.querySelector('.close-btn');

    // Drag-and-drop upload
    uploadArea.addEventListener('dragover', (e) => {
        e.preventDefault();
        uploadArea.style.borderColor = '#4CAF50';
    });

    uploadArea.addEventListener('dragleave', () => {
        uploadArea.style.borderColor = '#4CAF50';
    });

    uploadArea.addEventListener('drop', (e) => {
        e.preventDefault();
        uploadArea.style.borderColor = '#4CAF50';
        if (e.dataTransfer.files.length) {
            fileInput.files = e.dataTransfer.files;
            handleFiles(fileInput.files);
        }
    });

    // File-picker upload
    fileInput.addEventListener('change', () => {
        handleFiles(fileInput.files);
    });

    // Handle the selected files
    async function handleFiles(files) {
        if (files.length === 0) return;

        // Show the upload progress bar
        uploadProgress.style.display = 'block';
        uploadBar.style.width = '0%';
        uploadText.textContent = '0%';

        for (let i = 0; i < files.length; i++) {
            const file = files[i];
            const formData = new FormData();
            formData.append('image', file);

            // Upload the file (with progress)
            const xhr = new XMLHttpRequest();
            xhr.open('POST', '/upload');

            xhr.upload.addEventListener('progress', (e) => {
                if (e.lengthComputable) {
                    const percent = Math.round((e.loaded / e.total) * 100);
                    uploadBar.style.width = `${percent}%`;
                    uploadText.textContent = `Uploading: ${percent}%`;
                }
            });

            xhr.onload = async () => {
                if (xhr.status === 200) {
                    const res = JSON.parse(xhr.responseText);
                    if (res.status === 'success') {
                        // Upload finished; start processing (passing model and backend)
                        await processImage(res.filename, i + 1, files.length);
                    }
                }
            };

            xhr.send(formData);
        }
    }

    // Process one uploaded image
    async function processImage(filename, current, total) {
        // Read the currently selected model and backend
        const model = modelSelect.value;
        const backend = backendSelect.value;

        // Show the processing status
        statusContainer.style.display = 'block';
        processStatus.textContent = `Processing (${current}/${total}) - model: ${model}, backend: ${backend}`;
        fetchProgress.style.display = 'block';
        fetchBar.style.width = '0%';
        fetchText.textContent = 'Processing...';

        // Simulated progress while waiting for the response
        const progressInterval = setInterval(() => {
            const currentWidth = parseInt(fetchBar.style.width) || 0;
            if (currentWidth < 90) {
                fetchBar.style.width = `${currentWidth + 10}%`;
                fetchText.textContent = `Processing: ${currentWidth + 10}%`;
            }
        }, 500);

        // Ask the backend to process the image (passing model and backend)
        try {
            const response = await fetch(`/process/${filename}`, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({ model, backend })
            });
            const res = await response.json();

            clearInterval(progressInterval);
            if (res.status === 'success') {
                // Processing finished
                fetchBar.style.width = '100%';
                fetchText.textContent = 'Done';

                // Add the result to the page
                addResultToGrid(res.result_path, res.process_time, model, backend);

                // Hide the status box once everything is done
                if (current === total) {
                    setTimeout(() => {
                        statusContainer.style.display = 'none';
                        uploadProgress.style.display = 'none';
                    }, 1000);
                }
            } else {
                alert(`Processing failed: ${res.msg}`);
            }
        } catch (e) {
            clearInterval(progressInterval);
            alert(`Request failed: ${e.message}`);
        }
    }

    // Add a result card to the grid
    function addResultToGrid(imagePath, processTime, model, backend) {
        const resultCard = document.createElement('div');
        resultCard.className = 'result-card';
        resultCard.innerHTML = `
            <img src="${imagePath}" class="result-image" alt="detection result" data-path="${imagePath}">
            <div class="result-info">
                <p>Model: ${model}</p>
                <p>Backend: ${backend}</p>
                <p>Time: ${processTime}</p>
            </div>
        `;
        resultsGrid.prepend(resultCard);

        // Click a result image to zoom
        const img = resultCard.querySelector('.result-image');
        img.addEventListener('click', () => {
            imageModal.style.display = 'block';
            modalImage.src = img.dataset.path;
            modalCaption.textContent = `Model: ${model}, backend: ${backend}`;
        });
    }

    // Close the zoom modal
    closeBtn.addEventListener('click', () => {
        imageModal.style.display = 'none';
    });

    // Click outside the modal to close it
    imageModal.addEventListener('click', (e) => {
        if (e.target === imageModal) {
            imageModal.style.display = 'none';
        }
    });
});

Model Formats

Both the ONNX and CIX models are taken straight from modelscope's ai_model_hub_25_Q1_2 and simply renamed.
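A sketch of that renaming step: copy a downloaded model file into the models/&lt;name&gt;/&lt;backend&gt;/ layout that app.py's load_model expects. The source paths inside the modelscope cache vary by release, so the call site here is illustrative:

```python
# Copy a downloaded model file into the demo's expected layout:
# models/<name>/onnx/<name>.onnx or models/<name>/npu/<name>.cix.
import shutil
from pathlib import Path

def install_model(src, demo_root, name, backend):
    ext = "onnx" if backend == "onnx" else "cix"
    dst = Path(demo_root) / "models" / name / backend / f"{name}.{ext}"
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return dst
```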

Running the Demo

(venv) topgear@radxa-orion-o6:~/cix/demo$ python app.py
2025-11-12 23:09:48.362913436 [W:onnxruntime:Default, device_discovery.cc:164 DiscoverDevicesForPlatform] GPU device discovery failed: device_discovery.cc:89 ReadFileContents Failed to open file: "/sys/class/drm/card3/device/vendor"
 * Serving Flask app 'app'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.6.47:5000
Press CTRL+C to quit

Accessing from a Browser

As shown below:
image.png

Test Results

image.png
