Reference: Training YOLOv5 on your own dataset (detailed, complete version), by yolov5缔宇 on CSDN
Read the referenced blog post first, then come back to the questions below.

1. Not sure how to split a dataset when the label format differs

First, be clear about which type of label files you created.

(1) Label files with the .xml extension

Follow the referenced blog post as-is: run split_train_val.py first, then xml_to_yolo.py, and finally create a myvoc.yaml file. That is all you need.

For convenience, the original author's code is copied below so you can follow along.

split_train_val (XML version).py is shown below. It generates the ImageSets folder and splits the dataset; the test set is skipped here, and the train/val ratio is 8:2.

import os
import random
import argparse

parser = argparse.ArgumentParser()
#Path to the xml files; adjust to your own data (XML labels usually live under Annotations)
parser.add_argument('--xml_path', default='Annotations', type=str, help='input xml label path')
#Output path for the split lists; point this to ImageSets/Main under your dataset
parser.add_argument('--txt_path', default='ImageSets/Main', type=str, help='output txt label path')
opt = parser.parse_args()

trainval_percent = 1  # proportion of data used for train + val; no test set is split off here
train_percent = 0.8     # proportion of train within train + val; adjust as needed
xmlfilepath = opt.xml_path
txtsavepath = opt.txt_path
total_xml = os.listdir(xmlfilepath)
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)

num = len(total_xml)          # total number of label files, e.g. 100
list_index = range(num)
tv = int(num * trainval_percent)       # e.g. 100 * 1 = 100
tr = int(tv * train_percent)          # e.g. 100 * 0.8 = 80
trainval = random.sample(list_index, tv)
train = random.sample(trainval, tr)

file_trainval = open(txtsavepath + '/trainval.txt', 'w')
file_test = open(txtsavepath + '/test.txt', 'w')
file_train = open(txtsavepath + '/train.txt', 'w')
file_val = open(txtsavepath + '/val.txt', 'w')

for i in list_index:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        file_trainval.write(name)
        if i in train:
            file_train.write(name)
        else:
            file_val.write(name)
    else:
        file_test.write(name)

file_trainval.close()
file_train.close()
file_val.close()
file_test.close()
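
A usage sketch (run from your VOCData directory; the flags are optional because the defaults already match the layout above):

python split_train_val.py --xml_path Annotations --txt_path ImageSets/Main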

xml_to_yolo.py is shown below. It first converts the XML labels to YOLO txt format, then generates train.txt, val.txt, and test.txt under the dataSet_path folder. The paths you need to adapt are not repeated here; see the referenced blog post for details.

import xml.etree.ElementTree as ET
import os
from os import getcwd

sets = ['train', 'val', 'test']
classes = ['car', 'person'] # fill in your own class names
abs_path = os.getcwd()
print(abs_path) 


def convert(size, box):
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h


def convert_annotation(image_id):
    in_file = open('E:/deeplearning/yolov11-main/VOCData/Annotations/%s.xml' % (image_id), encoding='UTF-8')
    out_file = open('E:/deeplearning/yolov11-main/VOCData/labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        # difficult = obj.find('Difficult').text  # use this line instead if your XML capitalizes the tag
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # clamp out-of-bounds box coordinates to the image size
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')


wd = getcwd()
for image_set in sets:
    if not os.path.exists('E:/deeplearning/yolov11-main/VOCData/labels/'):
        os.makedirs('E:/deeplearning/yolov11-main/VOCData/labels/')
    image_ids = open('E:/deeplearning/yolov11-main/VOCData/ImageSets/Main/%s.txt' % (image_set)).read().strip().split()

    if not os.path.exists('E:/deeplearning/yolov11-main/VOCData/dataSet_path/'):
        os.makedirs('E:/deeplearning/yolov11-main/VOCData/dataSet_path/')

    list_file = open('dataSet_path/%s.txt' % (image_set), 'w')
    # the relative path above does not need changing
    for image_id in image_ids:
        list_file.write('E:/deeplearning/yolov11-main/VOCData/images/%s.jpg\n' % (image_id))
        convert_annotation(image_id)
    list_file.close()
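
A quick way to sanity-check the conversion (not part of the original script; run it after the code above, e.g. appended to the end of xml_to_yolo.py) is to feed convert() a known box and confirm every value lands in [0, 1]:

# A 100x200 box centered in a 640x480 image, passed as (xmin, xmax, ymin, ymax)
x, y, w, h = convert((640, 480), (270, 370, 140, 340))
print(x, y, w, h)   # roughly 0.498 0.498 0.156 0.417, all normalized to [0, 1]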

The myvoc.yaml file is shown below. The paths must point to the corresponding files in the dataSet_path folder generated by xml_to_yolo.py.

train: E:/deeplearning/yolov11-main/VOCData/dataSet_path/train.txt
val: E:/deeplearning/yolov11-main/VOCData/dataSet_path/val.txt

# number of classes
nc: 2

# class names
names: ['car', 'person']
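
With myvoc.yaml in place, training can be started. A minimal sketch assuming the Ultralytics package (the model file yolo11n.pt, the epoch count, and the image size are placeholders; adjust them to your setup):

from ultralytics import YOLO

model = YOLO('yolo11n.pt')  # pretrained weights; pick whatever model size you need
model.train(data='E:/deeplearning/yolov11-main/VOCData/myvoc.yaml', epochs=100, imgsz=640)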

(2) YOLO-format label files with the .txt extension

split_train_val (TXT version).py is shown below. It generates the ImageSets folder and splits the dataset; the test set is skipped here, and the train/val ratio is 8:2.

import os
import random
import argparse

# Set the random seed so the split is reproducible
random_seed = 42  # any integer works
random.seed(random_seed)

parser = argparse.ArgumentParser()
#Path to the txt label files; adjust to your own data (YOLO labels usually live under labels)
parser.add_argument('--xml_path', default='labels', type=str, help='input xml label path')
#Output path for the split lists; point this to ImageSets/Main under your dataset
parser.add_argument('--txt_path', default='ImageSets/Main', type=str, help='output txt label path')
opt = parser.parse_args()

trainval_percent = 1  # proportion of data used for train + val; no test set is split off here
train_percent = 0.8     # proportion of train within train + val; adjust as needed
xmlfilepath = opt.xml_path
txtsavepath = opt.txt_path
total_xml = os.listdir(xmlfilepath)
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)

num = len(total_xml)          # total number of label files, e.g. 100
list_index = range(num)
tv = int(num * trainval_percent)       # e.g. 100 * 1 = 100
tr = int(tv * train_percent)          # e.g. 100 * 0.8 = 80
trainval = random.sample(list_index, tv)
train = random.sample(trainval, tr)

file_trainval = open(txtsavepath + '/trainval.txt', 'w')
file_test = open(txtsavepath + '/test.txt', 'w')
file_train = open(txtsavepath + '/train.txt', 'w')
file_val = open(txtsavepath + '/val.txt', 'w')

for i in list_index:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        file_trainval.write(name)
        if i in train:
            file_train.write(name)
        else:
            file_val.write(name)
    else:
        file_test.write(name)

file_trainval.close()
file_train.close()
file_val.close()
file_test.close()

yolo_fenli.py is shown below. It directly generates train.txt, val.txt, and test.txt under the dataSet_path folder.

import os
from os import getcwd

# Define the dataset splits and classes
sets = ['train', 'val', 'test']
classes = ['boat']

# Get the current working directory
current_dir = getcwd()
print("Current working directory:", current_dir)

# Base path of the VOCData directory (here simply the current working directory)
voc_data_dir = current_dir

# Create the required directories if they do not exist
os.makedirs(os.path.join(voc_data_dir, "labels"), exist_ok=True)
os.makedirs(os.path.join(voc_data_dir, "dataSet_path"), exist_ok=True)

for image_set in sets:
    # Read the split file under ImageSets/Main (train.txt, val.txt or test.txt)
    image_set_path = os.path.join(voc_data_dir, "ImageSets", "Main", f"{image_set}.txt")
    with open(image_set_path, 'r') as f:
        image_ids = f.read().strip().split()
    
    # Create the path-list file under dataSet_path (train.txt, val.txt or test.txt)
    dataset_path_file = os.path.join(voc_data_dir, "dataSet_path", f"{image_set}.txt")
    with open(dataset_path_file, 'w') as list_file:
        for image_id in image_ids:
            # Write the absolute image path (e.g. VOCData/images/xxx.jpg)
            img_path = os.path.join(voc_data_dir, "images", f"{image_id}.jpg")
            list_file.write(f"{img_path}\n")
            
            # Check that the corresponding label file exists (e.g. VOCData/labels/xxx.txt)
            label_path = os.path.join(voc_data_dir, "labels", f"{image_id}.txt")
            if not os.path.exists(label_path):
                print(f"警告: 未找到标签文件 {label_path}")

print("处理完成!")

The myvoc.yaml file is shown below. The paths must point to the corresponding files in the dataSet_path folder generated by yolo_fenli.py, and nc/names must match your own classes (for the single-class boat example above that would be nc: 1 and names: ['boat']).

train: E:/deeplearning/yolov11-main/VOCData/dataSet_path/train.txt
val: E:/deeplearning/yolov11-main/VOCData/dataSet_path/val.txt

# number of classes
nc: 2

# class names
names: ['car', 'person']

(3) Label files with the .json extension (the MS COCO format, currently the most widely used in academia and industry)

You can simply ask an AI assistant how to convert the format, but the conversion itself is also easy to script; see the sketch below.
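
A minimal sketch of a COCO JSON to YOLO txt conversion (the input path annotations/instances.json and the output labels directory are assumptions; adjust them to your dataset). COCO stores each box as pixel [x, y, width, height] measured from the top-left corner, while YOLO expects one normalized "class cx cy w h" line per object:

import json
import os

coco_json = 'annotations/instances.json'   # assumed input path
out_dir = 'labels'                         # assumed output directory
os.makedirs(out_dir, exist_ok=True)

with open(coco_json, encoding='utf-8') as f:
    coco = json.load(f)

# COCO category ids are not always contiguous; map them to 0-based YOLO class indices
cat_ids = sorted(c['id'] for c in coco['categories'])
cat2idx = {cid: i for i, cid in enumerate(cat_ids)}

# Index image metadata by image id and collect YOLO lines per image
images = {img['id']: img for img in coco['images']}
lines = {img_id: [] for img_id in images}
for ann in coco['annotations']:
    img = images[ann['image_id']]
    w, h = img['width'], img['height']
    x, y, bw, bh = ann['bbox']       # pixel top-left x, y and box width, height
    cx = (x + bw / 2) / w            # normalized center x
    cy = (y + bh / 2) / h            # normalized center y
    lines[ann['image_id']].append(
        f"{cat2idx[ann['category_id']]} {cx:.6f} {cy:.6f} {bw / w:.6f} {bh / h:.6f}")

# Write one txt file per image, named after the image file
for img_id, img in images.items():
    name = os.path.splitext(img['file_name'])[0] + '.txt'
    with open(os.path.join(out_dir, name), 'w') as f:
        f.write('\n'.join(lines[img_id]))

After this, the resulting labels directory can be split and listed exactly as in section (2) above.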

2. Pros and cons of the different label formats

In real projects, a common and efficient workflow is:

  1. Annotation: use a labeling tool (such as LabelImg) to export PASCAL VOC XML or COCO JSON.

  2. Management and version control: archive the raw annotation files (XML or JSON) together with the images as the dataset's "source files".

  3. Training preparation: when you need to train YOLO, convert the source format to YOLO TXT with a conversion script (written yourself, such as the ones above).

  4. Training: train the model on the converted YOLO TXT labels.
