Test Samples

In order to evaluate the performance of a trained classifier, a collection of marked-up images is needed. When such a collection is not available, test samples may be created from a single object image by the createsamples utility. The scheme of test sample creation in this case is similar to that of training sample creation, since each test sample is a background image into which a randomly distorted and randomly scaled instance of the object picture is pasted at a random position.


If both the -img and -info arguments are specified, test samples will be created by the createsamples utility. The sample image is arbitrarily distorted as described above, then placed at a random location in a background image and stored. The corresponding description line is added to the file specified by the -info argument.
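For example, a test collection might be generated with a command along these lines (a sketch: face.png, negatives.dat and tests.dat are placeholder names for the object image, the background image list and the output description file):

$ createsamples -img face.png -num 100 -bg negatives.dat -info tests.dat -maxxangle 0.6 -maxzangle 0.3 -maxidev 40 -w 24 -h 24

The generated test images are stored and one description line per sample is appended to tests.dat.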


The -w and -h keys determine the minimal size of the placed object picture.


The test image file name format is as follows:

imageOrderNumber_x_y_width_height.jpg, where x, y, width and height are the coordinates of the bounding rectangle of the placed object. For example, 0001_0341_0241_0039_0039.jpg is the first generated sample, with the object pasted at position (341, 241) inside a 39x39 bounding rectangle.

Note that you should use a set of background images different from the background image set used during training.

Performance Evaluation

In order to evaluate the performance of the classifier, the performance utility may be used. It takes a collection of marked-up images, applies the classifier, and outputs the performance, i.e. the number of found objects, the number of missed objects, the number of false alarms, and other information.

Here is the list of options of the performance utility:

Usage: ./performance
  -data <classifier_directory_name>
  -info <collection_file_name>
  [-maxSizeDiff <max_size_difference = 1.500000>]
  [-maxPosDiff <max_position_difference = 0.300000>]
  [-sf <scale_factor = 1.200000>]
  [-ni]
  [-nos <number_of_stages = -1>]
  [-rs <roc_size = 40>]
  [-w <sample_width = 24>]
  [-h <sample_height = 24>]

Command line arguments:

-data <dir_name>

The directory in which the trained classifier is stored. A haarcascade xml file can also be specified; in that case the -w and -h options are unnecessary and ignored, because the haarcascade xml file already contains this information. (The cvLoadHaarClassifierCascade function used in the performance utility supports both a classifier directory and a haarcascade xml file, but this function is obsolete.)

-w <sample_width>,

-h <sample_height>

Size of training samples (in pixels). Must have exactly the same values as were used during training (the haartraining utility).

-info <tests_collection_file_name>

The file with the description of the test samples.

-maxSizeDiff <max_size_difference>,

-maxPosDiff <max_position_difference>

Determine the criterion by which a detected rectangle is matched against a reference one. Roughly, a detection counts as a hit when its size is within a factor of max_size_difference of the reference rectangle and its position deviates by no more than max_position_difference times the reference size; otherwise the reference object is counted as missed and the detection as a false alarm. Default values are 1.5 and 0.3, respectively.

-sf <scale_factor>

Detection parameter. Default value is 1.2. The detection window is repeatedly enlarged by this factor until it exceeds the size of the picture.
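For instance, with the default 24x24 sample size and scale factor 1.2, the scanned window sizes grow approximately as 24, 29, 35, 41, 50, 60, ... pixels per side (each step multiplying by 1.2 and rounding), stopping once the window no longer fits inside the image.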

-ni

Do not save the resulting detection images. By default, the performance utility requires directories whose names are the test image directories prefixed with 'det-', in which it stores result images showing the positions of detected objects as rectangles. For example, if a test image file is named "tests/01/img01.bmp/0001_0341_0241_0039_0039.jpg", the directory "det-tests/01/img01.bmp" must be created beforehand; otherwise an error such as "OpenCV ERROR: Unspecified error (could not save the image) in function cvSaveImage, loadsave.cpp(520)" is reported. The error can be avoided with the -ni option, or by creating the directories in advance with a one-liner like the following (it strips the file name from each path in the collection file and prefixes the directory with 'det-'):

$ cat <tests_collection_file_name> | perl -pe 's!^(.*)/.*$!det-$1!g' | xargs mkdir -p


-rs <roc_size>

Resolution of the Receiver Operating Characteristic (ROC) table. Default value is 40. This is not a detection parameter; it only sets the number of rows in the ROC output (i.e. the size of an internal buffer).
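A typical invocation therefore looks like this (a sketch: haarcascade is the directory produced by haartraining and tests.dat is the collection file created above; -w and -h must match the values used at training time):

$ ./performance -data haarcascade -info tests.dat -w 24 -h 24 -ni

Here -ni skips writing the 'det-' result images, so no extra directories need to be created beforehand.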


The output of the performance utility is as follows:

+================================+======+======+======+
| File Name                      | Hits |Missed| False|
+================================+======+======+======+
|tests/01/img01.bmp/0001_0153_005|     0|     1|     0|
+--------------------------------+------+------+------+
....
+--------------------------------+------+------+------+
|                           Total|   874|   554|    72|
+================================+======+======+======+

Number of stages: 15
Number of weak classifiers: 68
Total time: 115.000000
15
  874  72  0.612045  0.050420
  874  72  0.612045  0.050420
  360   2  0.252101  0.001401
  115   0  0.080532  0.000000
   26   0  0.018207  0.000000
    8   0  0.005602  0.000000
    4   0  0.002801  0.000000
    1   0  0.000700  0.000000
....

'Hits' is the number of correct detections. 'Missed' is the number of missed detections, i.e. false negatives (the object is truly there, but the detector failed to find it). 'False' is the number of false alarms, i.e. false positives (the object is not there, but the detector reported one).
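The third and fourth columns of the numeric block above are these counts normalized by the total number of reference objects (hits + missed), as the first row confirms:

hit rate         = 874 / (874 + 554) = 874 / 1428 = 0.612045
false alarm rate =  72 / (874 + 554) =  72 / 1428 = 0.050420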


The last table provides the data for a ROC plot; each successive row applies a stricter acceptance criterion, so both hits and false alarms decrease. A ROC curve shows how many objects can be correctly detected when some false alarm rate is allowed (in the extreme, the simplest way to detect everything is to always alarm). Refer to Receiver Operating Characteristic (ROC). You may plot it with MATLAB code like the following:

>> ROC = [ 874 72 0.612045 0.050420
874 72 0.612045 0.050420
360 2 0.252101 0.001401
115 0 0.080532 0.000000
26 0 0.018207 0.000000
8 0 0.005602 0.000000
4 0 0.002801 0.000000
1 0 0.000700 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
0 0 0.000000 0.000000
];
>> plot(ROC(:,4), ROC(:,3)) % false alarm rate (x) against hit rate (y)