
A Review of the Intel Dev Board

Intel IoT · Source: Intel IoT · 2025-01-24 09:37

Author: Sui Xiaojin

I received Intel's dev board, the little Nezha, and I happen to have an OAK camera on hand. Since both run on the same OpenVINO stack, let's review them together. Surprisingly, the board ships with Windows by default; flashing it to Linux makes testing more convenient.


First we flash the board to Linux, which makes testing easier. Windows + Python development would also work, but I want to do something a bit harder: Ubuntu + C++ inference, plus trying out ncnn on it. So, reluctantly, the stock system goes. For the OS we simply pick the Intel-adapted Ubuntu 22.04 image, making sure it matches the board's CPU model:


Downloading with Motrix is fairly fast. Then use Rufus to write the image to a USB drive / SD card. Once the system is running Linux, you can set up SSH for remote access.
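
If you prefer the command line to Rufus, the image can also be verified and written from another Linux machine. A minimal sketch, assuming a downloaded image named ubuntu-22.04-desktop-amd64.iso and a target device /dev/sdX (both are placeholders; double-check the device node before writing):

ubuntu@ubuntu:~$ sha256sum ubuntu-22.04-desktop-amd64.iso
ubuntu@ubuntu:~$ # replace /dev/sdX with the actual USB/SD device; this overwrites it completely
ubuntu@ubuntu:~$ sudo dd if=ubuntu-22.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync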

The system needs the official OpenVINO components installed so that model inference runs through OpenVINO on the Intel side. You could also use ncnn/MNN/ONNX Runtime, but the native components are a bit friendlier here.


First, configure the OAK environment so the depth camera can handle inference and distance measurement, then run detection inference on the dev board to get a feel for its performance. Conveniently, the chip inside the camera is also from Intel and uses the OpenVINO framework. The following steps set up the camera's library environment on the dev board:

ubuntu@ubuntu:~$ wget https://gitee.com/oakchina/depthai-core/releases/download/v2.28.0/depthai_2.28.0_amd64.deb
ubuntu@ubuntu:~$ sudo apt install -f
ubuntu@ubuntu:~$ sudo dpkg -i depthai_2.28.0_amd64.deb
(Reading database ... 164136 files and directories currently installed.)
Preparing to unpack depthai_2.28.0_amd64.deb ...
Unpacking depthai (2.28.0) over (2.28.0) ...
Setting up depthai (2.28.0) ...

Next, configure OpenVINO by following the manual; this is mainly needed later for writing code and converting models. I'll write the code in C++ to make things a bit more challenging.

https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2022_3_2&ENVIRONMENT=DEV_TOOLS&OP_SYSTEM=LINUX&DISTRIBUTION=PIP

The link above is the download page; the following steps are still executed on the dev board:

pip install openvino-dev==2022.3.2
(the runtime archive below is hosted on storage.openvinotoolkit.org)


ubuntu@ubuntu:~$ wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ mv l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64 openvino_2023
ubuntu@ubuntu:~$ mv openvino_2023/ /opt/intel/
ubuntu@ubuntu:~$ cd /opt/intel/
ubuntu@ubuntu:~$ cd openvino_2023/
ubuntu@ubuntu:/opt/intel/openvino_2023$ vim ~/.bashrc
source /opt/intel/openvino_2023/setupvars.sh
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2023/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2023/install_dependencies$ sudo -E ./install_openvino_dependencies.sh
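
A quick sanity check that the runtime can see the board's compute devices (assuming the pip package above installed cleanly; on this board the list should typically include CPU, plus GPU once the iGPU driver is set up):

ubuntu@ubuntu:~$ python3 -c "from openvino.runtime import Core; print(Core().available_devices)"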

The following steps are executed on my own host machine. The reason: the OpenVINO install on the dev board cannot convert the blob model for the camera, while the older OpenVINO release that can is not installable on the board, since 2021.4 only supports Ubuntu 20.04 and below and the board runs 22.04, which is too new.

Let's start with YOLOv5-Lite; the official docs provide the method and examples, so here is a brief walkthrough. I did this on my own Ubuntu 20.04 host, because the dev board's system is too new and I was worried that a blob converted in its OpenVINO environment might not run on the OAK camera.

ubuntu@ubuntu:~/Downloads$ axel -n 100 https://registrationcenter-download.intel.com/akdlm/IRC_NAS/18096/l_openvino_toolkit_p_2021.4.689.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_p_2021.4.689.tgz 
ubuntu@ubuntu:~/Downloads$ cd l_openvino_toolkit_p_2021.4.689/
ubuntu@ubuntu:~/Downloads/l_openvino_toolkit_p_2021.4.689$ sudo ./install_GUI.sh 
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2021/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2021/install_dependencies$ sudo -E ./install_openvino_dependencies.sh 
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ sudo vim ~/.bashrc 

Append at the end of the file:

source /opt/intel/openvino_2021/bin/setupvars.sh
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ source ~/.bashrc 
[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ cd /opt//intel/openvino_2021/deployment_tools/model_optimizer//install_prerequisites/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites$ sudo ./install_prerequisites.sh

Download the model and convert it:

ubuntu@ubuntu:~$ git clone https://github.com/ppogg/YOLOv5-Lite

For the model code, refer to the official OAK example code:


Convert to ONNX and then to OpenVINO IR; export_onnx.py comes from the official reference:

ubuntu@ubuntu:~/YOLOv5-Lite$ pip3 install -r requirements.txt
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 export_onnx.py -w v5lite-e.pt -imgsz 640
Namespace(blob=False, convert_tool='blobconverter', img_size=[640, 640], 
input_model=PosixPath('/home/ubuntu/YOLOv5-Lite/v5lite-e.pt'), name='v5lite-e', 
opset=12, output_dir=PosixPath('/home/ubuntu/YOLOv5-Lite'), shaves=6, 
spatial_detection=False)
[18:12:38] INFO   YOLOv5  v1.5-16-g9d649a6 torch 2.4.1+cu121 CPU      
                                        
Fusing layers... 
[18:12:41] INFO   Model Summary: 167 layers, 781205 parameters, 0 gradients, 
          2.9 GFLOPS                         
 
      INFO   Starting ONNX export with onnx 1.16.1...          
      INFO   Starting to simplify ONNX...                
      INFO   ONNX export success, saved as:               
              /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx       
      INFO   anchors:                          
              [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0,  
          62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 
          326.0]                           
      INFO   anchor_masks:                        
              {'side80': [0, 1, 2], 'side40': [3, 4, 5], 'side20':
          [6, 7, 8]}                         
      INFO   Anchors data export success, saved as:           
              /home/ubuntu/YOLOv5-Lite/v5lite-e.json       
      INFO   Export complete (3.61s).                  
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx
  - Path for generated IR:   /home/ubuntu/YOLOv5-Lite/saved/FP16
  - IR output name:   v5lite-e
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   Not specified, inherited from the model
  - Output layers:   Not specified, inherited from the model
  - Input shapes:   [1,3,640,640]
  - Mean values:   [0,0,0]
  - Scale values:   [255.0,255.0,255.0]
  - Scale factor:   Not specified
  - Precision of IR:   FP16
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   None
  - Reverse input channels:   False
ONNX specific parameters:
  - Inference Engine found in:   /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:   2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:   2021.4.1-3926-14e67d86634-releases/2021/4
[ WARNING ] 
Detected not satisfied dependencies:
  networkx: installed: 3.1, required: ~= 2.5
  numpy: installed: 1.23.5, required: < 1.20
 
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_onnx.sh
Note that install_prerequisites scripts may install additional components.
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/onnx/parameter_ext.py:20: DeprecationWarning: `mapping.TENSOR_TYPE_TO_NP_TYPE` is now deprecated and will be removed in a future release.To silence this warning, please use `helper.tensor_dtype_to_np_dtype` instead.
  'data_type': TENSOR_TYPE_TO_NP_TYPE[t_type.elem_type]
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/analysis/boolean_input.py:13: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  nodes = graph.get_op_nodes(op='Parameter', data_type=np.bool)
/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/front/common/partial_infer/concat.py:36: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  mask = np.zeros_like(shape, dtype=np.bool)
[ WARNING ] Const node '/model.8/Resize/Add_input_port_1/value338417277' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ] Const node '/model.12/Resize/Add_input_port_1/value341817280' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ] Changing Const node '/model.8/Resize/Add_input_port_1/value338418006' data type from float16 to  for Elementwise operation
[ WARNING ] Changing Const node '/model.12/Resize/Add_input_port_1/value341817580' data type from float16 to  for Elementwise operation
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
[ SUCCESS ] Total execution time: 10.69 seconds. 
[ SUCCESS ] Memory consumed: 104 MB. 
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*
ubuntu@ubuntu:~/YOLOv5-Lite$ 

Converting the model (the same step, this time with the pip-installed Model Optimizer):

ubuntu@ubuntu:~$ find . -name "mo_onnx.py"
./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py
ubuntu@ubuntu:~$ python3 ./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
[ WARNING ] Use of deprecated cli option --data_type detected. Option use in the following releases will be fatal.
Check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg_all&medium=organic or on https://github.com/openvinotoolkit/openvino
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
ubuntu@ubuntu:~$ pip3 install blobconverter
Then convert it to a blob (compile_tool below; a blobconverter alternative is sketched right after the listing):


[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:~/YOLOv5-Lite$ cd /opt/intel/openvino_2021/deployment_tools/tools
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ sudo chmod 777 compile_tool/
[sudo] password for ubuntu: 
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ cd compile_tool/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ./compile_tool -m /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4
Inference Engine: 
  IE version ......... 2021.4.1
  Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
 
Network inputs:
  images : U8 / NCHW
Network outputs:
  output1_yolov5 : FP16 / NCHW
  output2_yolov5 : FP16 / NCHW
  output3_yolov5 : FP16 / NCHW
[Warning][VPU][Config] Deprecated option was used : VPU_MYRIAD_PLATFORM
Done. LoadNetwork time elapsed: 6529 ms
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ls
compile_tool README.md v5lite-e.blob
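
Since blobconverter was installed above, the same .blob can in principle also be produced without a local compile_tool. A minimal sketch, assuming the FP16 IR from the previous step is available (the shave count mirrors the export defaults; by default the package goes through the online converter service):

import blobconverter

# compile the FP16 IR produced by the Model Optimizer into a MyriadX blob
blob_path = blobconverter.from_openvino(
    xml="/home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml",
    bin="/home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin",
    data_type="FP16",
    shaves=6,
    version="2021.4",  # OpenVINO version the IR was generated with
)
print(blob_path)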

With the model exported, first try it on the OAK camera itself. In this setup the whole model runs on the camera side, both inference and distance measurement, so at this point all it shows is that the dev board supports a depth camera like the OAK.


Next, let's modify the code so that the model runs on the dev board with OpenVINO inference while the distance measurement stays on the camera. Below is the code, compiled on the dev board remotely through CLion. Plug the OAK depth camera into the Nezha board's USB port, connect the board to a monitor, then run the camera; I'll upload the project to GitHub later.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link the OpenCV library
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv )
 

main.cpp

#include <atomic>
#include <chrono>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <vector>
// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
 
/*
The code is the same as for Tiny-yolo-V3, the only difference is the blob file.
The blob was compiled following this tutorial: https://github.com/TNTWEN/OpenVINO-YOLOV4
*/
 
 
static const std::vector<std::string> labelMap = {
        "person",        "bicycle",      "car",           "motorbike",     "aeroplane",   "bus",         "train",       "truck",        "boat",
        "traffic light", "fire hydrant", "stop sign",     "parking meter", "bench",       "bird",        "cat",         "dog",          "horse",
        "sheep",         "cow",          "elephant",      "bear",          "zebra",       "giraffe",     "backpack",    "umbrella",     "handbag",
        "tie",           "suitcase",     "frisbee",       "skis",          "snowboard",   "sports ball", "kite",        "baseball bat", "baseball glove",
        "skateboard",    "surfboard",    "tennis racket", "bottle",        "wine glass",  "cup",         "fork",        "knife",        "spoon",
        "bowl",          "banana",       "apple",         "sandwich",      "orange",      "broccoli",    "carrot",      "hot dog",      "pizza",
        "donut",         "cake",         "chair",         "sofa",          "pottedplant", "bed",         "diningtable", "toilet",       "tvmonitor",
        "laptop",        "mouse",        "remote",        "keyboard",      "cell phone",  "microwave",   "oven",        "toaster",      "sink",
        "refrigerator",  "book",         "clock",         "vase",          "scissors",    "teddy bear",  "hair drier",  "toothbrush"};
 
static std::atomic<bool> syncNN{true};
 
 
int main() {
    // Create pipeline
    dai::Pipeline pipeline;
 
    // Define sources
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto spatialDataCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();
 
 
    // Properties
    camRgb->setPreviewSize(640, 640);
    camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);
    camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
    camRgb->setPreviewKeepAspectRatio(false); // resize the video to fit the preview size so the two stay aligned

    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);


    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
    stereo->setLeftRightCheck(true);
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
    stereo->setExtendedDisparity(true);
 
    dai::Point2f topLeft(0.4f, 0.4f);
    dai::Point2f bottomRight(0.6f, 0.6f);
 
    dai::SpatialLocationCalculatorConfigData config;
    config.depthThresholds.lowerThreshold = 100;
    config.depthThresholds.upperThreshold = 10000;
    config.roi = dai::Rect(topLeft, bottomRight);
 
    spatialDataCalculator->initialConfig.addROI(config);
    spatialDataCalculator->inputConfig.setWaitForMessage(false);
 
 
    // Network specific settings
    auto detectionNetwork = pipeline.create<dai::node::YoloDetectionNetwork>();
    detectionNetwork->setBlob("../v5lite-e.blob");
    detectionNetwork->setConfidenceThreshold(0.5);
    //Yolo specific parameters
    detectionNetwork->setNumClasses(80);
    detectionNetwork->setCoordinateSize(4);
    detectionNetwork->setAnchors({10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326});
    detectionNetwork->setAnchorMasks({{{"side80",{0, 1, 2}},{"side40",{3, 4, 5}},{"side20",{6, 7, 8}}}});
    detectionNetwork->setIouThreshold(0.5);
 
    // RGB output
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    xoutRgb->setStreamName("rgb");

    // depth output
    auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
    xoutDepth->setStreamName("depth");

    // spatial-location (distance) data output
    auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
    xoutSpatialData->setStreamName("spatialData");

    // spatial-location config input
    auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
    xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
 
 
    // Linking: preview is the network input canvas, video is the full-resolution stream
    camRgb->video.link(xoutRgb->input); // video stream for display
    camRgb->preview.link(detectionNetwork->input); // preview stream for inference
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
 
    spatialDataCalculator->passthroughDepth.link(xoutDepth->input);
    stereo->depth.link(spatialDataCalculator->inputDepth);
 
    spatialDataCalculator->out.link(xoutSpatialData->input);
    xinSpatialCalcConfig->out.link(spatialDataCalculator->inputConfig);
 
 
    // output
    auto xlinkParseOut = pipeline.create<dai::node::XLinkOut>();
    xlinkParseOut->setStreamName("parseOut");

    auto xlinkoutOut = pipeline.create<dai::node::XLinkOut>();
    xlinkoutOut->setStreamName("out");

    auto xlinkPassthroughOut = pipeline.create<dai::node::XLinkOut>();
    xlinkPassthroughOut->setStreamName("passthrough");
 
 
    detectionNetwork->out.link(xlinkParseOut->input);
    detectionNetwork->passthrough.link(xlinkPassthroughOut->input);
 
 
    // Connect to device and start pipeline
    dai::Device device;
 
    device.setIrLaserDotProjectorBrightness(1000);
    device.setIrFloodLightBrightness(0);
    device.startPipeline(pipeline);
 
    // Output queues will be used to get the rgb frames and nn data from the outputs defined above
    auto detectQueue = device.getOutputQueue("parseOut",8,false);
    auto passthQueue = device.getOutputQueue("passthrough", 8, false);
    auto depthQueue = device.getOutputQueue("depth", 8, false);
    auto spatialCalcQueue = device.getOutputQueue("spatialData", 8, false);
    auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig", 8, false);
    auto rgbQueue = device.getOutputQueue("rgb", 8, false);
 
    bool printOutputLayersOnce = true;
    auto color = cv::Scalar(0,255,0);
 
 
    std::vector<dai::ImgDetection> detections;
    auto startTime = std::chrono::steady_clock::now();
    int counter = 0;
    float fps = 0;
    auto color2 = cv::Scalar(255, 255, 255);
    cv::Scalar color1 = cv::Scalar(0, 0, 255);
 
    while (true) {
        counter++;
        auto currentTime = std::chrono::steady_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::duration<float>>(currentTime - startTime);
        if(elapsed > std::chrono::seconds(1)) {
            fps = counter / elapsed.count();
            counter = 0;
            startTime = currentTime;
        }
 
        std::shared_ptr<dai::ImgFrame> inRgb = rgbQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgFrame> inDepth = depthQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgDetections> inDet = detectQueue->get<dai::ImgDetections>();
        std::shared_ptr<dai::ImgFrame> ImgFrame = passthQueue->get<dai::ImgFrame>();
 
        cv::Mat frame = inRgb->getCvFrame();
        cv::Mat src = ImgFrame->getCvFrame();
 
        cv::Mat depthFrameColor;
        cv::Mat depthFrame = inDepth->getFrame();
        cv::normalize(depthFrame, depthFrameColor, 255, 0, cv::NORM_INF, CV_8UC1);
        cv::equalizeHist(depthFrameColor, depthFrameColor);
        cv::applyColorMap(depthFrameColor, depthFrameColor, cv::COLORMAP_HOT);
 
        inDet = detectQueue->get<dai::ImgDetections>();
        if(inDet) {
            detections = inDet->detections;
            for(auto& detection : detections) {
                int x1 = detection.xmin * src.cols;
                int y1 = detection.ymin * src.rows;
                int x2 = detection.xmax * src.cols;
                int y2 = detection.ymax * src.rows;
 
                uint32_t labelIndex = detection.label;
                std::string labelStr = std::to_string(labelIndex);
                if(labelIndex < labelMap.size()) {
                    labelStr = labelMap[labelIndex];
                }
                cv::putText(src, labelStr, cv::Point(x1 + 10, y1 + 20), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                std::stringstream confStr;
                confStr << std::fixed << std::setprecision(2) << detection.confidence * 100;
                cv::putText(src, confStr.str(), cv::Point(x1 + 10, y1 + 40), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                cv::rectangle(src, cv::Point(x1, y1), cv::Point(x2, y2), color, cv::FONT_HERSHEY_SIMPLEX);

                // 1920*1080
                //cv::rectangle(depthFrameColor, cv::Point(x1, y1), cv::Point(x2, y2), color, cv::FONT_HERSHEY_SIMPLEX);
                int top_left_x = detection.xmin * frame.cols;
                int top_left_y = detection.ymin * frame.rows;
                int bottom_right_x = detection.xmax * frame.cols;
                int bottom_right_y = detection.ymax * frame.rows;
 
                // clamp to the frame bounds
                top_left_x = top_left_x < 0 ? 0 : top_left_x;
                bottom_right_x = bottom_right_x > frame.cols - 1 ? frame.cols - 1 : bottom_right_x;
                top_left_y = top_left_y < 0 ? 0 : top_left_y;
                bottom_right_y = bottom_right_y > frame.rows - 1 ? frame.rows - 1 : bottom_right_y;
 
                topLeft.x = top_left_x;
                topLeft.y = top_left_y;
                bottomRight.x = bottom_right_x;
                bottomRight.y = bottom_right_y;
 
                // push an ROI in actual pixel coordinates to the spatial-location calculator
                config.roi = dai::Rect(topLeft, bottomRight);
                dai::SpatialLocationCalculatorConfig cfg;
                cfg.addROI(config);
                spatialCalcConfigInQueue->send(cfg);
                auto spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
 
                for (auto &depthData : spatialData) {
                    auto roi = depthData.config.roi;
                    roi = roi.denormalize(depthFrameColor.cols, depthFrameColor.rows);
                    auto xmin = (int) roi.topLeft().x;
                    auto ymin = (int) roi.topLeft().y;
                    auto xmax = (int) roi.bottomRight().x;
                    auto ymax = (int) roi.bottomRight().y;
 
                    // clamp to the frame bounds
//                    xmin = xmin < 0 ? 0 : xmin;
//                    xmax = xmax > frame.cols - 1 ? frame.cols - 1 : xmax;
//                    ymin = ymin < 0 ? 0 : ymin;
//                    ymax = ymax > frame.rows - 1 ? frame.rows - 1 : ymax;
 
                    auto coords = depthData.spatialCoordinates;
                    auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
                    auto fontType = cv::FONT_HERSHEY_TRIPLEX;
 
                    std::stringstream rgb_depthX, depthX, rgb_depthX_;
                    rgb_depthX << "X: " << (int) coords.x << " mm";
                    rgb_depthX_.precision(2);
                    rgb_depthX_ << "dis: " << std::fixed << static_cast<float>(distance) << " mm";
 
                    cv::rectangle(frame,
                                  cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                                  color,
                                  fontType);
 
                    cv::putText(frame, rgb_depthX_.str(), cv::Point(xmin + 10, ymin - 20),
                                fontType,
                                0.5, color1);
 
                    cv::putText(frame, rgb_depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthY, depthY;
                    rgb_depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(frame, rgb_depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthZ, depthZ;
                    rgb_depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(frame, rgb_depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType,
                                0.5, color1);
 
 
                    cv::rectangle(depthFrameColor,
                            cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                            color,
                            fontType);
                    depthX << "X: " << (int) coords.x << " mm";
                    cv::putText(depthFrameColor, depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType, 0.5, color1);
                    depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(depthFrameColor, depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType, 0.5, color1);
                    depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(depthFrameColor, depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType, 0.5, color1);
                }
            }
 
            std::stringstream fpsStr;
            fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
//            printf("fps %f\n", fps);
            cv::putText(src, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
            cv::putText(frame, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
 
            // Show the frame
//            cv::imshow("src", src);
            cv::imshow("frame", frame);
            cv::imwrite("frame.jpg", frame);
//            cv::imshow("depth", depthFrameColor);
            int key = cv::waitKey(1);
            if(key == 'q' || key == 'Q' || key == 27) {
                return 0;
            }
        }
    }
}


Next, drop the inference code running on the camera and do the inference locally on the Nezha dev board instead, switching everything over to OpenVINO inference:

(1) First, change the capture path to go through the encoder/decoder route while keeping distance measurement: the camera encodes, and the board does pure software decoding on the CPU (H.265 in the code below) at roughly 30 fps. That looks acceptable; software decoding on this little board's CPU is passable.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link the OpenCV library
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv -lavformat -lavcodec -lswscale -lavutil -lz)

main.cpp

#include <chrono>
#include <cmath>
#include <cstdio>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}
 
 
#include "utility.hpp"
 
#include "depthai/depthai.hpp"
 
using namespace std::chrono;
 
int main(int argc, char **argv) {
  dai::Pipeline pipeline;
  //define the nodes
  auto cam = pipeline.create<dai::node::ColorCamera>();
  cam->setBoardSocket(dai::CameraBoardSocket::RGB);
  cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
  cam->setVideoSize(1920, 1080);
  cam->setFps(30);
  auto Encoder = pipeline.create<dai::node::VideoEncoder>();
  Encoder->setDefaultProfilePreset(cam->getVideoSize(), cam->getFps(),
                   dai::VideoEncoderProperties::Profile::H265_MAIN);
 
 
  cam->video.link(Encoder->input);
 
  auto monoLeft = pipeline.create<dai::node::MonoCamera>();
  auto monoRight = pipeline.create<dai::node::MonoCamera>();
  auto stereo = pipeline.create<dai::node::StereoDepth>();
  auto spatialLocationCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();

  auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
  auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
  auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
  auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
  xoutDepth->setStreamName("depth");
  xoutSpatialData->setStreamName("spatialData");
  xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
  xoutRgb->setStreamName("rgb");
 
  monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
  monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);

  stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
  stereo->setLeftRightCheck(true);
  stereo->setExtendedDisparity(true);
  spatialLocationCalculator->inputConfig.setWaitForMessage(false);
 
 
  dai::SpatialLocationCalculatorConfigData config;
  config.depthThresholds.lowerThreshold = 200;
  config.depthThresholds.upperThreshold = 10000;
  config.roi = dai::Rect(dai::Point2f(0.1f, 0.45f), dai::Point2f(0.2f, 0.55f));
  spatialLocationCalculator->initialConfig.addROI(config);
 
  // Linking
  monoLeft->out.link(stereo->left);
  monoRight->out.link(stereo->right);
 
  spatialLocationCalculator->passthroughDepth.link(xoutDepth->input);
  stereo->depth.link(spatialLocationCalculator->inputDepth);
 
  spatialLocationCalculator->out.link(xoutSpatialData->input);
  xinSpatialCalcConfig->out.link(spatialLocationCalculator->inputConfig);
 
 
  //define the encoded-stream output
  auto xlinkoutpreviewOut = pipeline.create<dai::node::XLinkOut>();
  xlinkoutpreviewOut->setStreamName("out");
 
  Encoder->bitstream.link(xlinkoutpreviewOut->input);
 
 
  //build the pipeline and push it to the device
  dai::Device device(pipeline);
  device.setIrLaserDotProjectorBrightness(1000);
 
  //grab frames for display
  auto outqueue = device.getOutputQueue("out", cam->getFps(), false); //maxSize is the buffer depth
  auto depthQueue = device.getOutputQueue("depth", 4, false);
  auto spatialCalcQueue = device.getOutputQueue("spatialData", 4, false);
 
  //auto videoFile = std::ofstream("video.h265", std::binary);
 
 
  int width = 1920;
  int height = 1080;
  AVCodec *pCodec = avcodec_find_decoder(AV_CODEC_ID_H265);
  AVCodecContext *pCodecCtx = avcodec_alloc_context3(pCodec);
  int ret = avcodec_open2(pCodecCtx, pCodec, NULL);
  if (ret < 0) { //failed to open the decoder
    printf("Could not open codec.\n");
    return -1;
  }
  AVFrame *picture = av_frame_alloc();
  picture->width = width;
  picture->height = height;
  picture->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(picture, 1);
  if (ret < 0) {
    printf("av_frame_get_buffer error\n");
    return -1;
  }
  AVFrame *pFrame = av_frame_alloc();
  pFrame->width = width;
  pFrame->height = height;
  pFrame->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(pFrame, 1);
  if (ret < 0) {
    printf("av_frame_get_buffer error\n");
    return -1;
  }
  AVFrame *pFrameRGB = av_frame_alloc();
  pFrameRGB->width = width;
  pFrameRGB->height = height;
  pFrameRGB->format = AV_PIX_FMT_RGB24;
  ret = av_frame_get_buffer(pFrameRGB, 1);
  if (ret < 0) {
    printf("av_frame_get_buffer error\n");
    return -1;
  }


  int picture_size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, width, height,
                                              1); //bytes needed to store one frame in this pixel format
  uint8_t *out_buff = (uint8_t *) av_malloc(picture_size * sizeof(uint8_t));
  av_image_fill_arrays(picture->data, picture->linesize, out_buff, AV_PIX_FMT_YUV420P, width,
             height, 1);
  //conversion context from the decoder's YUV420P output to RGB24 for OpenCV
  SwsContext *img_convert_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                         width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC,
                         NULL, NULL, NULL);
  AVPacket *packet = av_packet_alloc();
 
  auto startTime = steady_clock::now();
  int counter = 0;
  float fps = 0;
  auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig");
  while (true) {
    counter++;
    auto currentTime = steady_clock::now();
    auto elapsed = duration_cast<duration<float>>(currentTime - startTime);
    if (elapsed > seconds(1)) {
      fps = counter / elapsed.count();
      counter = 0;
      startTime = currentTime;
    }
 
 
 
 
    auto h265Packet = outqueue->get<dai::ImgFrame>();
 
 
    //videoFile.write((char *) (h265Packet->getData().data()), h265Packet->getData().size());
 
    packet->data = (uint8_t *) h265Packet->getData().data();  //pointer to one complete encoded frame
    packet->size = h265Packet->getData().size();    //size of the H.265 frame in bytes
    packet->stream_index = 0;
    ret = avcodec_send_packet(pCodecCtx, packet);
    if (ret < 0) {
      printf("avcodec_send_packet error\n");
      continue;
    }
    av_packet_unref(packet);
    int got_picture = avcodec_receive_frame(pCodecCtx, pFrame);
    av_frame_is_writable(pFrame);
    if (got_picture < 0) {
      printf("avcodec_receive_frame error\n");
      continue;
    }

    sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
          height,
          pFrameRGB->data, pFrameRGB->linesize);
 
 
    cv::Mat mRGB(cv::Size(width, height), CV_8UC3);
    mRGB.data = (unsigned char *) pFrameRGB->data[0];
    cv::Mat mBGR;
    cv::cvtColor(mRGB, mBGR, cv::COLOR_RGB2BGR);
    std::stringstream fpsStr;
    fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
    printf("fps %f\n", fps);
    cv::putText(mBGR, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 0.4,
                cv::Scalar(0, 255, 0));


    config.roi = dai::Rect(dai::Point2f(3 * 0.1f, 0.45f), dai::Point2f((3 + 1) * 0.1f, 0.55f));
    dai::SpatialLocationCalculatorConfig cfg;
    cfg.addROI(config);
    spatialCalcConfigInQueue->send(cfg);
 
    // auto inDepth = depthQueue->get();
    //cv::Mat depthFrame = inDepth->getFrame(); // depthFrame values are in millimeters
 
 
    auto spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
    for(auto depthData : spatialData) {
      auto roi = depthData.config.roi;
      roi = roi.denormalize(mBGR.cols, mBGR.rows);
 
      auto xmin = static_cast<int>(roi.topLeft().x);
      auto ymin = static_cast<int>(roi.topLeft().y);
      auto xmax = static_cast<int>(roi.bottomRight().x);
      auto ymax = static_cast<int>(roi.bottomRight().y);
 
      auto coords = depthData.spatialCoordinates;
      auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
      auto color = cv::Scalar(0, 200, 40);
      auto fontType = cv::FONT_HERSHEY_TRIPLEX;
      cv::rectangle(mBGR, cv::Point(xmin, ymin), cv::Point(xmax, ymax), color);
      std::stringstream depthDistance;
      depthDistance.precision(2);
      depthDistance << std::fixed << static_cast<float>(distance / 1000.0f) << "m";
      cv::putText(mBGR, depthDistance.str(), cv::Point(xmin + 10, ymin + 20), fontType, 0.5, color);
    }



    cv::imshow("demo", mBGR);
    cv::imwrite("demo.jpg", mBGR);

    cv::waitKey(1);


  }


  return 0;
}

The whole thing decodes on the Nezha dev board at around 30 fps, which is decent. I won't upload screenshots; you can benchmark it yourself. The prerequisite is having the FFmpeg libraries installed, as sketched below.
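
One way to get those libraries on Ubuntu 22.04 is through apt (the package names below are the standard Ubuntu ones; adjust them if your image differs):

ubuntu@ubuntu:~$ sudo apt update
ubuntu@ubuntu:~$ sudo apt install -y ffmpeg libavcodec-dev libavformat-dev libswscale-dev libavutil-dev zlib1g-dev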

(2) YOLOv8 model conversion and inference on the dev board. Make sure opset=11 here; opset 14 does not work. The conversion itself can be done directly on the dev board.

ubuntu@ubuntu:~$ pip install ultralytics

Conversion script:

ubuntu@ubuntu:~$ cat convert_yolov8.py
from ultralytics import YOLO
 
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 
# Use the model
# model.train(data="coco8.yaml", epochs=3) # train the model
# metrics = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
path = model.export(format="openvino",opset=11) # export the model to OpenVINO IR
CMakeLists.txt


cmake_minimum_required(VERSION 3.12)
project(yolov8_openvino_example)
 
set(CMAKE_CXX_STANDARD 14)
 
find_package(OpenCV REQUIRED)
 
include_directories(
  ${OpenCV_INCLUDE_DIRS}
  /opt/intel/openvino_2023/runtime/include
)
 
add_executable(detect 
  main.cc
  inference.cc
)
 
target_link_libraries(detect
  ${OpenCV_LIBS}
   /opt/intel/openvino_2023/runtime/lib/intel64/libopenvino.so
)

For the test code, just use the official example: ultralytics/examples/YOLOv8-OpenVINO-CPP-Inference on the ultralytics GitHub. A minimal sketch of what that inference loop boils down to follows below.
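
For reference, the core of loading and running the exported IR with the OpenVINO 2.0 C++ API looks roughly like this. It is only a sketch under assumptions: the model path and image are placeholders, preprocessing is the naive resize-and-scale variant, and the decode/NMS step of the official example is omitted:

#include <iostream>
#include <openvino/openvino.hpp>
#include <opencv2/opencv.hpp>

int main() {
    ov::Core core;
    // load the IR produced by model.export(format="openvino", opset=11)
    auto model = core.read_model("yolov8n_openvino_model/yolov8n.xml");
    auto compiled = core.compile_model(model, "CPU");   // "GPU" would target the iGPU
    auto request = compiled.create_infer_request();

    // naive preprocessing: resize to the network input, scale to [0,1], NCHW, BGR->RGB
    cv::Mat img = cv::imread("bus.jpg");
    cv::Mat blob;
    cv::dnn::blobFromImage(img, blob, 1.0 / 255.0, cv::Size(640, 640), cv::Scalar(), true, false);

    ov::Tensor input(ov::element::f32, {1, 3, 640, 640}, blob.ptr<float>());
    request.set_input_tensor(input);
    request.infer();

    // raw YOLOv8 output (e.g. 1 x 84 x 8400); decoding and NMS follow as in the official example
    ov::Tensor output = request.get_output_tensor();
    std::cout << "output shape: " << output.get_shape() << std::endl;
    return 0;
}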


(3) Add OpenVINO inference on the board + CPU/FFmpeg decoding on the board + streaming; I'm not adding the OAK distance-measurement code here.


It turns out this model is fairly heavy and stutters a bit when added to the inference side, so I'll leave it out for now and just do CPU encoding/decoding plus streaming. The test layout and GitHub address are below:


Command to pull the stream (an example is sketched below):
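
The exact command depends on how the server in the repository is configured; assuming it exposes an RTSP endpoint, something like the following pulls and displays the stream (the address and port are placeholders):

ubuntu@ubuntu:~$ ffplay -rtsp_transport tcp rtsp://<board-ip>:8554/stream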

GitHub: https://github.com/sxj731533730/OAK_Rtserver.git

References:

[1] How to convert a YOLOv5-Lite model to blob format for the OAK camera (CSDN blog):

https://blog.csdn.net/oakchina/article/details/129403986

[2] https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/pose-estimation-webcam


Original title: Developer Hands-On | Trying the Intel Dev Board with an OAK Depth Camera

Source: WeChat official account 英特尔物联网 (Intel IoT).
