Formats

meituan/yolov6

MT-YOLOv6 uses a modified version of YOLO Darknet annotations.

Below, we show how to convert data to and from meituan/yolov6. We also list popular models that use the meituan/yolov6 data format. Our conversion tools are free to use.

Convert Data to meituan/yolov6

Use Roboflow to convert data to and from meituan/yolov6 and other popular annotation formats.

Roboflow is a trusted solution for converting and managing your data. Today, over 250,000 datasets are managed on Roboflow, comprised of 100 million labeled and annotated images.

With Roboflow, you get a solution with:

Free data conversion

SOC 2 Type 2 compliance

Trusted by 250,000+ developers

Once your data is in Roboflow, just add the link from your dataset and you're ready to go. We even include the code to export to common inference formats like TFLite, ONNX, and CoreML.

Below are pre-configured models that use the meituan/yolov6 data format.

What computer vision models use meituan/yolov6?

The MT-YOLOv6 models all use the meituan/yolov6 data format.

To see our entire list of computer vision models, check out the Roboflow Model Library.

The annotation format is the same as YOLOv5 but with changes to the YAML file. Each image has one txt file with a single line for each bounding box. The format of each row is

class_id center_x center_y width height

where fields are space delimited, and the coordinates are normalized from zero to one.

Note: To convert to normalized xywh from pixel values, divide x (and width) by the image's width and divide y (and height) by the image's height.
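
For example, the following Python sketch (not part of the official MT-YOLOv6 tooling) performs that conversion; the pixel coordinates and the 1000x466 image size below are hypothetical, chosen only to illustrate the math.

convert_to_yolo.py
def to_yolo_row(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    # Box size in pixels
    box_w = x_max - x_min
    box_h = y_max - y_min
    # Normalize the box center by the image width and height
    center_x = (x_min + box_w / 2) / img_w
    center_y = (y_min + box_h / 2) / img_h
    # Normalize width and height the same way
    return f"{class_id} {center_x} {center_y} {box_w / img_w} {box_h / img_h}"

# Hypothetical 114x81 px box inside a 1000x466 px image; the output row
# has the same shape as the lines in the example file below.
print(to_yolo_row(1, 560, 127, 674, 208, 1000, 466))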

001.txt
1 0.617 0.3594420600858369 0.114 0.17381974248927037
1 0.094 0.38626609442060084 0.156 0.23605150214592274
1 0.295 0.3959227467811159 0.13 0.19527896995708155
1 0.785 0.398068669527897 0.07 0.14377682403433475
1 0.886 0.40879828326180256 0.124 0.18240343347639484
1 0.723 0.398068669527897 0.102 0.1609442060085837
1 0.541 0.35085836909871243 0.094 0.16952789699570817
1 0.428 0.4334763948497854 0.068 0.1072961373390558
1 0.375 0.40236051502145925 0.054 0.1351931330472103
1 0.976 0.3927038626609442 0.044 0.17167381974248927
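
A label file in this format can be read back into Python with a few lines. The snippet below is a minimal sketch; the labels/train/001.txt path is only a hypothetical example of the usual YOLO-style directory layout.

read_labels.py
from pathlib import Path

def read_labels(txt_path):
    # Each non-empty line is: class_id center_x center_y width height
    boxes = []
    for line in Path(txt_path).read_text().splitlines():
        if not line.strip():
            continue
        class_id, cx, cy, w, h = line.split()
        boxes.append((int(class_id), float(cx), float(cy), float(w), float(h)))
    return boxes

# e.g. boxes = read_labels("labels/train/001.txt")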

The `data.yaml` file contains configuration values used by the model to locate images and map class names to class_id's.

data.yaml
train: ./images/train
val: ./images/valid
test: ./images/test

nc: 3
names: ['head', 'helmet', 'person']
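
To check how class_id values map to names, you can load the file with PyYAML. A minimal sketch, assuming PyYAML is installed and data.yaml is in the working directory:

load_config.py
import yaml  # PyYAML

with open("data.yaml") as f:
    cfg = yaml.safe_load(f)

# Map each class_id to its name, e.g. {0: 'head', 1: 'helmet', 2: 'person'}
id_to_name = dict(enumerate(cfg["names"]))
print(cfg["nc"], id_to_name)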