Chapter 3: Training an Image-Based Illuminance Estimation Model

Source: Red Linux · Author: Red Linux · 2024-11-06 15:57

Preface

This post reaches the image-based illuminance estimation chapter. Here I record how to use TensorFlow 2 to train a model that estimates illuminance from a picture. The usual workflow is to photograph a number of scenes and measure each of them with a lux meter, using the images as inputs and the meter readings as outputs. For demonstration purposes, however, I use the MIT-Adobe FiveK Dataset for the images, and for the "illuminance" value I compute a brightness from each image's RGB values with the Rec. 709 luma formula r*0.2126 + g*0.7152 + b*0.0722 (illustrated in the short sketch below). That gives a dataset of a useful size and a basis for the training and testing that follow. Now on to the main content.
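A minimal sketch of that per-pixel luma computation (my own illustration, not part of the original scripts):

# Rec. 709 luma of a single RGB pixel; the coefficients sum to 1.0
def luma(r, g, b):
    return r * 0.2126 + g * 0.7152 + b * 0.0722

print(luma(255, 255, 255))  # ~255.0 -> per-pixel maximum for 8-bit channels (up to float rounding)
print(luma(0, 0, 0))        # 0.0    -> per-pixel minimum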

Data Acquisition

The MIT-Adobe FiveK Dataset contains 5000 raw DNG images plus TIFF versions retouched by five experts A, B, C, D and E (the dataset is normally used to train image-enhancement models; here I repurpose it for illuminance estimation). The complete archive is about 50 GB, which was too much for my machine's storage and bandwidth, so I downloaded the images one by one with a script: the download URLs follow a regular pattern and the list of image names can be fetched from the official site, so the script reads the file of image names, builds each URL, and downloads it with curl. I chose to download the raw DNG images and expert C's TIFF images.
The script for downloading the raw DNG files is:

#!/usr/bin/bash

# Working directory for the downloads
CURRENT_PATH="/home/red/Downloads/fivek_dataset/expertc"
# Change into it
cd ${CURRENT_PATH}

# Read the lists of image names (filesAdobeMIT.txt is read here but not used in this loop)
files_name=`cat filesAdobe.txt`
files_mit_name=`cat filesAdobeMIT.txt`

j=0
for i in ${files_name};do
    # Example URL: https://data.csail.mit.edu/graphics/fivek/img/dng/a0001-jmac_DSC1459.dng
    URL='https://data.csail.mit.edu/graphics/fivek/img/dng/'${i}'.dng'
    file_cur=${URL##*/}
    echo "Downloading ${URL}@${j}"
    j=$((j+1))
    if [ -f "${file_cur}" ];then
        echo "${file_cur} already exists, skipping"
    else
        curl -O ${URL}
    fi
done

The script for downloading the files retouched by expert C is:

#!/usr/bin/bash

# Working directory for expert C's images
CURRENT_PATH="/home/red/Downloads/fivek_dataset/expertc"
# Change into it
cd ${CURRENT_PATH}

# Read the lists of image names (filesAdobeMIT.txt is read here but not used in this loop)
files_name=`cat filesAdobe.txt`
files_mit_name=`cat filesAdobeMIT.txt`

j=0
for i in ${files_name};do
    # Download the images adjusted by expert C (the other four experts' sets can be fetched the same way)
    URL='https://data.csail.mit.edu/graphics/fivek/img/tiff16_c/'${i}'.tif'
    file_cur=${URL##*/}
    echo "Downloading ${URL}@${j}"
    j=$((j+1))
    if [ -f "${file_cur}" ];then
        echo "${file_cur} already exists, skipping"
    else
        echo "${file_cur} does not exist, downloading"
        curl -O ${URL}
    fi
done

After several days of on-and-off downloading I ended up with roughly 1000 images. With the images in hand, the next step is to compute an illuminance value for each one. I do this with a Python script and the Pillow package; to prepare for the later port to the AI300G, I resize every image to a uniform 255×255, and I store each computed illuminance together with the image name in a CSV file. The script is as follows:

#!/usr/bin/env python3

import sys
import csv
import os
import re

from PIL import Image

gs_illumiance_csv_file_name='illumiance_estimate.csv'
gs_illumiance_data_list=[['Name', 'Illuminance']]
DEST_DIR_NAME=r'PNG255'

def illuname_estimate(t):
    # Rec. 709 luma of a single pixel
    r,g,b=t
    return r*0.2126+g*0.7152+b*0.0722


def get_pic_pixels(pic_name):
    with Image.open(pic_name) as pic:
        ans=0
        pic=pic.resize((255,255))
        print(f'raw name:{pic_name}')
        # Assumes a relative, single-level path such as "JP/a0001-xxx.jpeg"; keep the base name
        match=re.match(r'\w+/(\S+)\.\w+', pic_name)
        if match:
            basename=match.group(1)
            basename=DEST_DIR_NAME+'/'+basename+'.png'
            print(f'new name:{basename}')
            pic.save(basename)
            #  pic.show()
        width, height = pic.size
        # Sum the per-pixel luma over the whole resized image
        for x in range(width):
            for y in range(height):
                r, g, b = pic.getpixel((x, y))
                ans=ans+illuname_estimate((r,g,b))

    # Round the illuminance to an integer
    ans=round(ans)
    print(f'{pic_name}: illuname ans:{ans}')
    return ans

def insert_item(pic_name, illumiance_estimate):
    global gs_illumiance_data_list
    item_template=['NONE', -1]
    item_template[0]=pic_name
    item_template[1]=illumiance_estimate
    gs_illumiance_data_list.append(item_template)

def do_with_dir(dir_name):
    for filename in os.listdir(dir_name):
        filepath=os.path.join(dir_name, filename)
        if (os.path.isfile(filepath)):
            print("do input %s" %(filepath))
            ans=get_pic_pixels(filepath)
            insert_item(filename, ans)

if len(sys.argv) > 1:
    print("do input dir:%s" %(sys.argv[1]))
    if not os.path.exists(DEST_DIR_NAME):
        os.makedirs(DEST_DIR_NAME)
    do_with_dir(sys.argv[1])
    # Write the collected (name, illuminance) rows to the CSV file
    with open(gs_illumiance_csv_file_name, 'w', newline='') as csv_fd:
        csv.writer(csv_fd).writerows(gs_illumiance_data_list)
else:
    print("Please input a directory of pictures")

This yields a dataset that looks like the following:

❯ head illumiance_estimate.csv
Name,Illuminance
a0351-MB_070908_006_dng.jpeg,3680630
a0100-AlexWed07-9691_dng.jpeg,1258657
a0147-kme_333.jpeg,5168820
a0261-_DSC2228_dng.jpeg,2571498
a0255-_DSC1448.jpeg,8747593
a0054-kme_097.jpeg,5351908
a0393-_DSC0040.jpeg,1783394
a0304-dgw_137_dng.jpeg,3118835
a0437-jmacDSC_0011.jpeg,6140107

At this point I had a dataset of a reasonable size (667 photos in my case), and the next step is model training.
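Before moving on, it is worth sanity-checking the label range, because the training script in the next section scales the Illuminance column by 1/16,777,215 (0xFFFFFF). Since each resized image is 255×255 pixels and the per-pixel luma is at most 255, the largest possible sum is 255 × 255 × 255 = 16,581,375, so this divisor keeps the scaled labels inside [0, 1). A minimal check (my own sketch, assuming pandas is installed and the CSV from the previous step is in the current directory):

#!/usr/bin/env python3
# Quick sanity check of the label range before training (not part of the original post)
import pandas as pd

df = pd.read_csv('illumiance_estimate.csv')
print(df['Illuminance'].describe())            # count / mean / min / max of the raw sums
print((df['Illuminance'] / 16777215.0).max())  # should stay below 1.0 after scaling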

Model Training

The basic idea is to first split the dataset into a training set and a test (validation) set at a 4:1 ratio, then use TensorFlow to build a model, train its parameters, and check the result.
The rough flow is:

  1. Build a TensorFlow Dataset from the CSV file;
  2. Build the model and use that dataset for training and evaluation.

The code for this part is:

#!/usr/bin/python3.11

import os
# Must be set in the environment before TensorFlow is imported to take effect
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'

import sys
import csv
import pathlib

import numpy as np
import pandas as pd
import PIL
import PIL.Image
import matplotlib.pyplot as plt
import tensorflow as tf

AUTOTUNE=tf.data.AUTOTUNE
BATCH_SIZE=32
IMG_WIDTH=255
IMG_HEIGHT=255
ILLUMINACE_FILE=r'illumiance_estimate.csv'
print(tf.__version__)

image_count = len(os.listdir(r'JP'))
print(f'whole img count={image_count}')
# The CSV file has two columns: 'Name' (image file name) and 'Illuminance'
df = pd.read_csv(ILLUMINACE_FILE)

# Convert the DataFrame columns into arrays TensorFlow can consume
image_paths = df['Name'].values
labels = df['Illuminance'].values
labels = labels.astype(np.float32)
# Scale labels into [0, 1): 0xFFFFFF is just above the largest possible sum 255*255*255
labels /= 16777215.0

# Create a Dataset of (image path, label) pairs
gs_dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))

print(type(gs_dataset))
print(gs_dataset)
print(r'-------------------------------------------')
# Define a function to load and preprocess an image
def load_and_preprocess_image(image_path, label):
    print(image_path)
    # Images are read from the JP directory (JPEG copies of the dataset)
    image_path='JP/'+image_path
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_WIDTH, IMG_HEIGHT])
    #  image /= 255.0  # normalization (handled by the Rescaling layer in the model instead)
    return image, label

# Apply this function to the Dataset
gs_dataset = gs_dataset.map(load_and_preprocess_image)
# Shuffle once (reshuffle_each_iteration=False keeps the train/validation split stable)
gs_dataset = gs_dataset.shuffle(image_count, reshuffle_each_iteration=False)

val_size = int(image_count * 0.2)

gs_train_ds = gs_dataset.skip(val_size)
gs_val_ds = gs_dataset.take(val_size)

def configure_for_performance(ds):
    ds = ds.cache()
    ds = ds.shuffle(buffer_size=1000)
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

gs_train_ds = configure_for_performance(gs_train_ds)
gs_val_ds = configure_for_performance(gs_val_ds)

image_batch, illuminance_batch = next(iter(gs_train_ds))

#  plt.figure(figsize=(10, 10))

#  for i in range(9):
  #  ax = plt.subplot(3, 3, i + 1)
  #  print(image_batch[i])
  #  #  img_data=image_batch[i].numpy()*255.0
  #  #  plt.imshow(img_data.astype("uint8"))
  #  plt.imshow(image_batch[i].numpy().astype("uint8"))
  #  illuminance = illuminance_batch[i]
  #  plt.title(illuminance.numpy())
  #  plt.axis("off")

#  plt.show()

#  sys.exit()

model = tf.keras.Sequential([
  # Scale pixel values from [0, 255] to [0, 1]
  tf.keras.layers.Rescaling(1./255),
  # Three convolution + pooling stages to extract image features
  tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(32, 3, activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  # Regression head: a single output for the normalized illuminance
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(1)
])

model.compile(
  optimizer='adam',
  loss='mean_squared_error')

model.fit(
  gs_train_ds,
  validation_data=gs_val_ds,
  epochs=12
)

model.save("illu_v01")

Running the code above, the final loss and val_loss come out as:

❯ ./train_tf2_v2.py
2024-08-08 13:41:48.341117: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-08 13:41:48.342596: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:41:48.363696: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-08 13:41:48.363729: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-08 13:41:48.364549: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-08 13:41:48.368601: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:41:48.368762: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-08 13:41:48.801750: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2.15.0
whole img count=667
2024-08-08 13:41:51.138713: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-08-08 13:41:51.139135: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2256] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
<class 'tensorflow.python.data.ops.from_tensor_slices_op._TensorSliceDataset'>
<_TensorSliceDataset element_spec=(TensorSpec(shape=(), dtype=tf.string, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None))>
-------------------------------------------
Tensor("args_0:0", shape=(), dtype=string)
Epoch 1/12
17/17 [==============================] - 11s 603ms/step - loss: 98.9302 - val_loss: 0.1012
Epoch 2/12
17/17 [==============================] - 8s 495ms/step - loss: 0.0493 - val_loss: 0.0043
Epoch 3/12
17/17 [==============================] - 8s 481ms/step - loss: 0.0078 - val_loss: 0.0043
Epoch 4/12
17/17 [==============================] - 8s 479ms/step - loss: 0.0025 - val_loss: 0.0040
Epoch 5/12
17/17 [==============================] - 8s 477ms/step - loss: 0.0023 - val_loss: 0.0029
Epoch 6/12
17/17 [==============================] - 8s 480ms/step - loss: 0.0021 - val_loss: 0.0028
Epoch 7/12
17/17 [==============================] - 8s 482ms/step - loss: 0.0020 - val_loss: 0.0028
Epoch 8/12
17/17 [==============================] - 8s 482ms/step - loss: 0.0019 - val_loss: 0.0027
Epoch 9/12
17/17 [==============================] - 8s 482ms/step - loss: 0.0018 - val_loss: 0.0026
Epoch 10/12
17/17 [==============================] - 8s 485ms/step - loss: 0.0017 - val_loss: 0.0026
Epoch 11/12
17/17 [==============================] - 8s 485ms/step - loss: 0.0015 - val_loss: 0.0023
Epoch 12/12
17/17 [==============================] - 8s 484ms/step - loss: 0.0011 - val_loss: 0.0020

The final val_loss of 0.0020 is an MSE on the normalized labels, which corresponds to an RMSE of roughly sqrt(0.0020) × 16,777,215 ≈ 750,000 in raw illuminance units. The trained model is saved to the illu_v01 directory.

❯ ls illu_v01/
assets  fingerprint.pb  keras_metadata.pb  saved_model.pb  variables
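The images were resized to 255×255 with the later AI300G port in mind. Depending on the toolchain that port uses, one possible intermediate step is exporting the SavedModel to TFLite; whether the AI300G flow actually consumes TFLite is an assumption on my part, so treat this as an optional sketch rather than part of the original workflow:

#!/usr/bin/python3.11
# Optional: convert the SavedModel in illu_v01 to a TFLite flatbuffer (toolchain-dependent)
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("illu_v01")
tflite_model = converter.convert()
with open("illu_v01.tflite", "wb") as f:
    f.write(tflite_model)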

Model Testing

Now that we have a model, the next step is to test it. The following Python code runs a test on the PC:

#!/usr/bin/python3.11

import sys

import tensorflow as tf

IMG_WIDTH=255
IMG_HEIGHT=255

# Reload the model saved by the training script
reload_model=tf.keras.models.load_model("illu_v01")
image_path=r'./JP/a0001-jmac_DSC1459.jpeg'
if len(sys.argv) < 2:
    print('Please input some pic to predict')
    sys.exit()
else:
    image_path=sys.argv[1]


# Load a single image and shape it into a batch of one
image = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMG_WIDTH, IMG_HEIGHT])
image = tf.reshape(image, [1, IMG_WIDTH, IMG_HEIGHT, 3])

# Scale the prediction back from [0, 1) to raw illuminance units
predictions=reload_model.predict(image)
print(f'{image_path} ans={predictions*16777215}')

A quick test of the model:

❯ ./check_tf2.py JP/a0001-jmac_DSC1459.jpeg
2024-08-08 13:57:08.263506: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-08 13:57:08.264895: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:57:08.285614: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-08 13:57:08.285646: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-08 13:57:08.286510: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-08 13:57:08.290464: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-08-08 13:57:08.290608: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-08 13:57:08.725843: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-08-08 13:57:11.051710: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-08-08 13:57:11.051982: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2256] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
1/1 [==============================] - 0s 57ms/step
JP/a0001-jmac_DSC1459.jpeg ans=[[5459503.]]

The estimated illuminance is 5459503, which differs from the actual value of 5363799 by just under 2%. In any case, the full training-and-testing flow for the model is now complete; the next step is to simulate a video stream on the PC and use the model to compute illuminance on the frames in real time. Stay tuned.
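For reference, the relative error quoted above can be checked in a couple of lines (my own sketch; the two numbers are taken from the prediction output and the CSV above):

# Quick check of the relative error between predicted and computed illuminance
predicted = 5459503   # model output for a0001-jmac_DSC1459.jpeg
actual    = 5363799   # value computed by the Pillow script for the same image
print(f'relative error: {abs(predicted - actual) / actual:.1%}')   # ~1.8%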

Reviewing editor: Huang Yu
