<h1>Solve the Mystery of Vehicle Detection Algorithm</h1>
<p><em>Bench Talk for Design Engineers | The Official Blog of Mouser Electronics | EIT 2020: The Intelligent Revolution | Tue, 20 Oct 2020</em></p>
<p><img alt="" src="/blog/Portals/11/Vehicle%20Detection%20AI_Theme%20Image_min.jpg" style="width: 600px; height: 400px;" title="" /></p>
<p style="font-size:10px;"><em>(Source: Zapp2Photo/Shutterstock.com)</em></p>
<p>As mysterious as vehicle detection might seem, the technology essentially boils down to a mathematical formula that calculates the pixel features of a specified area of an image and then determines the category to which the object belongs based on those features. Object detection methods can generally be broken down into two steps, feature extraction and category determination, with histograms of oriented gradients (HOG) commonly used for the former and support vector machines (SVMs) for the latter, the two working in coordination with each other.</p>
<p>This article will introduce commonly used vehicle-detection algorithms, focusing on the aspects listed below to unravel the mysteries behind these algorithms and to provide the reader with a clear understanding of the machine learning process.</p>
<ul>
<li>Overview of Application Scenarios</li>
<li>A Detailed Description of HOG Feature Calculations</li>
<li>An Overview of the SVM Workflow</li>
<li>Comparison and Summary</li>
</ul>
<h2>Overview of Application Scenarios</h2>
<p>Vehicle-detection technology is widely used in the real world, as illustrated in the following examples. The personal cars we drive on a regular basis sometimes have one or more onboard rearview cameras. The system is activated when another vehicle is about to pass within a certain distance to the rear. Once a vehicle is detected behind the car, an alert is issued and the driver is prompted to slow down (<strong>Figure 1</strong>). Another example can be found in automated driving, in which the positions of surrounding cars are used to analyze their speeds, distances, and other factors before the car's path is automatically adjusted in response.</p>
<p><img alt="" src="/blog/Portals/11/Figure%201%20Rearview%20Mirror%20Vehicle%20Detection.jpg" style="width: 600px; height: 360px;" title="" /></p>
<p><em><small><strong>Figure 1</strong>: A rearview camera can detect vehicles within the image and indicates the vehicles' positions using rectangular boxes. (Source: Just Super/Shutterstock.com)</small></em></p>
<p>Vehicle-detection systems are also widely used for traffic control and road-condition monitoring (<strong>Figure 2</strong>). For example, the system pictured is placed at a tunnel entrance, where it counts traffic flow during a given time period so that the corresponding restriction policies can be implemented, thus reducing the occurrence of traffic accidents and informing drivers where traffic is congested or light. This allows drivers to choose the best route for avoiding traffic. In addition, traffic-flow statistics can also be used by parking lots at airports or train stations, with big-data analysis used to determine whether parking spaces are in high demand so that staff can respond and allocate resources accordingly.</p>
<p>Signals produced by vehicle detection systems can be combined with other technologies to detect traffic flow around a given traffic signal, and workers can leverage big data and artificial intelligence to calculate and determine a reasonable time interval for traffic lights.</p>
<p><img alt="" src="/blog/Portals/11/Figure%202%20Vehicle%20Detection%20in%20a%20City%20Street.jpg" style="width: 600px; height: 454px;" title="" /></p>
<p><em><small><strong>Figure 2</strong>: Application of a vehicle detection system in a road condition monitoring scenario. (Source: PaO_STUDIO/Shutterstock.com)</small></em></p>
<h2>Joint HOG and SVM Algorithms</h2>
<p>A method for the detection of pedestrians using HOG combined with SVM was originally proposed by Navneet Dalal and Bill Triggs of INRIA in France, presented in 2005 at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in San Diego, Calif. The HOG+SVM approach has since evolved to detect all kinds of objects, including the locations of vehicles and traffic lanes.</p>
<div>
<h2>HOG Feature Calculation</h2>
</div>
<p>1. HOG is a local feature extraction algorithm, so it cannot reliably detect an object in a large image containing a complex background, no matter how many features are extracted. The image must first be cropped to isolate the target object. It has been experimentally shown that the vehicle used as the target object must account for more than 80 percent of the cropped image in order to obtain favorable results. The cropped local image is then partitioned into blocks, and features are extracted from the individual cells that make up each block.</p>
<p>(In each image, multiple pixels form a single cell, and multiple cells make up a block.)</p>
<p>The HOG feature calculation process is explained using <strong>Figure 3</strong> below as an example.</p>
<p>First, the entire picture is cropped to obtain a 40px x 40px image, after which we must define the following variables:</p>
<p><img alt="" src="/blog/Portals/11/Movement%20of%20a%20Block.jpg" style="width: 263px; height: 232px;" title="" /></p>
<p><em><small><strong>Figure 3</strong>: To illustrate the movement of a block in a cropped image, this figure shows that 4 cells form a block with a side length of 16px, and the image is cropped to a length and width of 40px x 40px. The corresponding step size is 1 cell, meaning that movement is carried out one cell (8 pixels) at a time. (Source: Original artwork created for Mouser)</small></em></p>
<ol style="list-style-type:upper-alpha;">
<li>We define the movement step s, measured in cells, such as: s = 1.</li>
<li>We further define the cell size in pixels, such as: 8 x 8.</li>
<li>We further define the block size, such as: each block consists of 2 x 2 = 4 cells.</li>
<li>Finally, we define the number of bins, setting the value as needed, such as: bin = 9. Each bin is used to store the calculated histogram gradient direction's accumulated value, as further explained below.</li>
</ol>
<p>2. The input image and color are standardized to reduce the interference of light or shadow with the detection accuracy of objects in the image. This is done by performing a gamma color correction and converting the image to grayscale (the principles underlying gamma color correction are beyond the scope of this article).</p>
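<p>A minimal sketch of this preprocessing step, assuming an 8-bit RGB input and an illustrative gamma value of 0.5 (the article does not specify the exact correction parameters, so these are assumptions):</p>

```python
import numpy as np

def preprocess(rgb, gamma=0.5):
    """Gamma-correct an 8-bit RGB image and convert it to grayscale."""
    img = rgb.astype(np.float64) / 255.0      # scale to [0, 1]
    img = img ** gamma                        # power-law (gamma) correction
    # Standard luminance weights for RGB -> grayscale conversion:
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

# Example: a uniform mid-gray 40x40 patch
patch = np.full((40, 40, 3), 64, dtype=np.uint8)
gray = preprocess(patch)   # 40x40 grayscale array in [0, 1]
```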
<p>3. The size of the gradient is calculated.</p>
<p>The calculation method is explained using an example of a partial block belonging to a cell (<strong>Figure 4</strong>), and the formulas used to calculate the gradient at the center pixel, which has a value of 25, are shown below.</p>
<p><img alt="" src="/blog/Portals/11/First%20Table.jpg" style="width: 245px; height: 217px;" title="" /></p>
<p><em><small><strong>Figure 4</strong>: Partial block size and pixel values for a single cell. (Source: Original artwork created for Mouser)</small></em></p>
<p>Among the convolution kernels evaluated experimentally, the simple [-1, 0, 1] kernel has been shown to work best. The kernel [-1, 0, 1] can be understood as a small matrix used to compute each pixel's gradient. We apply [-1, 0, 1] horizontally (x-axis positive direction, rightward) and its transpose [-1, 0, 1]<sup>T</sup> vertically (y-axis positive direction, upward) to obtain the horizontal and vertical gradient components of each pixel in the image area. The square root of the sum of the squares of the two components gives the magnitude of the gradient at that point, and thus the formula used is as follows:</p>
<p><img alt="" src="/blog/Portals/11/Formula%20Before%20Figure%205.png" style="width: 395px; height: 148px;" title="" /></p>
<p>Therefore, the horizontal direction of a point with pixel value 25 is calculated as shown in <strong>Figure 5</strong> below:</p>
<p><img alt="" src="/blog/Portals/11/Middle%20Table.jpg" style="width: 241px; height: 118px;" title="" /></p>
<p><em><small><strong>Figure 5</strong>: Calculating the pixel value for the horizontal direction of a midpoint with pixel value 25. (Source: Original artwork created for Mouser)</small></em></p>
<p>Formula:</p>
<p><img alt="" src="/blog/Portals/11/Formula%20before%20Figure%206.png" style="width: 374px; height: 66px;" title="" /></p>
<p>Therefore, the vertical orientation of a point with a pixel value of 25 is calculated as shown in <strong>Figure 6</strong>:</p>
<p><img alt="" src="/blog/Portals/11/Last%20Table.jpg" style="width: 238px; height: 122px;" title="" /></p>
<p><em><small><strong>Figure 6</strong>: Calculating the pixel value for the vertical direction of a midpoint with pixel value 25. (Source: Original artwork created for Mouser)</small></em></p>
<p>Formula: </p>
<p><img alt="" src="/blog/Portals/11/Formula%20After%20Figure%207_1.png" style="width: 404px; height: 39px;" title="" /> </p>
<p>4. The corresponding gradient direction is calculated using the following formula:</p>
<p><img alt="" src="/blog/Portals/11/Formula%20in%20Step%204_1.png" style="width: 272px; height: 59px;" title="" /></p>
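<p>Steps 3 and 4 can be sketched in Python with NumPy. The [-1, 0, 1] kernel reduces to the difference of the two neighboring pixels; the 3x3 patch below uses illustrative values, not the actual values from Figure 4:</p>

```python
import numpy as np

def gradient(img):
    """Per-pixel gradient via the [-1, 0, 1] kernel (difference of neighbors)."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal component
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical component
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # square root of the squared sum
    # Unsigned gradient direction in degrees, folded into [0, 180):
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    return magnitude, direction

# Illustrative 3x3 patch centered on a pixel of value 25:
patch = np.array([[10, 20, 30],
                  [15, 25, 35],
                  [20, 30, 40]])
mag, ang = gradient(patch)
# Center pixel: gx = 35 - 15 = 20, gy = 30 - 20 = 10,
# so magnitude = sqrt(500) and direction = arctan(10/20) in degrees.
```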
<p>5. By repeating Steps 3 and 4 for all the pixels in each cell and summing the values, we obtain an accumulated gradient histogram over nine gradient directions for each cell (<strong>Figure 7</strong>).</p>
<p>6. We solve for the HOG features of the image block, meaning that we concatenate the included cell features together.</p>
<p>7. We then solve for the HOG feature of the whole image, meaning that we concatenate the included image block features.</p>
<p>8. Method of calculating feature dimensions:</p>
<p>The block in the above example is moved four steps each in both the x and y directions:</p>
<p>(40 − 16) / 8 + 1 = 4</p>
<p>Each block includes four cells:</p>
<p>2 × 2 = 4</p>
<p>Formula for calculating feature dimensions:</p>
<p><img alt="" src="/blog/Portals/11/Formula%203%20in%20Step%2010.png" style="width: 470px; height: 43px;" title="" /></p>
<p>Thus, the feature dimension calculated for the current image example corresponds to 576.</p>
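<p>The dimension arithmetic above can be sketched directly, using the parameters defined for the Figure 3 example (a stride of one cell, i.e., 8 pixels):</p>

```python
img_size = 40       # cropped image side length in pixels
cell_size = 8       # 8x8-pixel cells
block_cells = 2     # each block is 2x2 cells (16px per side)
stride = cell_size  # step s = 1 cell = 8 pixels
n_bins = 9          # gradient-direction bins per cell

block_size = block_cells * cell_size                  # 16
positions = (img_size - block_size) // stride + 1     # (40 - 16) / 8 + 1 = 4
cells_per_block = block_cells ** 2                    # 2 x 2 = 4
feature_dim = positions * positions * cells_per_block * n_bins
print(feature_dim)  # 576
```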
<p>9. We then normalize the obtained gradient vector. The key goal of normalization is to prevent overfitting, which yields good classification on the training set but extremely low detection rates on the test set, a situation that is obviously unacceptable for our purposes. As with the normalization of machine-learning features in general, if we obtain feature values distributed across (0, 200), for example, we need to prevent a few large values near 200 from dominating the overall distribution of features (the model would deviate from the overall trend to fit those values, resulting in overfitting), so we normalize the feature distribution to a fixed interval. Dalal mentions in his paper that the results obtained using the L2-norm are highly satisfactory.</p>
<p>(Here, (0, 200) represents the range of the feature values.)</p>
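<p>A sketch of L2-norm normalization applied to one block's concatenated cell histograms (the small epsilon guarding against division by zero is an assumption, following Dalal's formulation):</p>

```python
import numpy as np

def l2_normalize(block_features, eps=1e-6):
    """Scale a block's concatenated cell histograms to unit L2 length."""
    v = np.asarray(block_features, dtype=np.float64)
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)

# Example: a block histogram dominated by one large value (e.g., 200)
block = np.array([200.0, 5.0, 3.0, 1.0])
normed = l2_normalize(block)
# After normalization the vector has unit length, so no single
# large value can skew the overall feature distribution.
```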
<p>10. Features along with corresponding labels are sent to the SVM to train the classifier.</p>
<h2>Histogram Gradient Direction and Bin Values</h2>
<p>Dalal mentions in his paper that "the purpose of this step is to provide an indication of the direction of the quantization gradient of the function for the local image region while remaining able to maintain weak sensitivity to the appearance of the detected object in the image."</p>
<p>The gradient size is inserted into the corresponding bin based on the direction of the gradient, and there are two possible methods for defining direction.</p>
<p>An unsigned approach, in which opposite gradient directions are treated as equivalent, is suitable for vehicle or other object detection, while a signed approach has been experimentally shown to be unsuitable for such tasks. The signed approach can, however, be useful when the image is zoomed in, zoomed out, or rotated and the pixels are then returned to their original positions. Refer to the fifth link included in the references for a more in-depth look.</p>
<p>1. Unsigned: (0, π)</p>
<p>Below we take a closer look at unsigned interpolation.</p>
<p>In this article, we will use three tables to explain how interpolation is carried out. In each table, the first row shows the calculated amplitude, and the second row specifies the directional value of the bin, which is obtained by dividing 180 degrees by the number of defined bins. The third row shows the bin sequence numbers, starting with 0.</p>
<p>The gradient directions can be split into as many bins as necessary. For example, when they are divided into nine bins, i.e., when a gradient histogram with nine directions per cell is used, each bin covers a range of 20 degrees. Amplitudes (calculated above) are inserted into each bin, and the final sum of the amplitudes in each bin corresponds to the vertical axis of the histogram, while the horizontal axis corresponds to the range of bin values, in this case (0, 8).</p>
<h2>Interpolation Method:</h2>
<p>If a pixel has an amplitude of 80 and an orientation of 20 degrees, these values are inserted into the corresponding positions in the blue area of <strong>Table 1</strong>.</p>
<p><em><small><strong>Table 1</strong>: Values to insert in the corresponding positions if a pixel has an amplitude of 80 and an orientation of 20 degrees.</small></em></p>
<p><img alt="" src="/blog/Portals/11/Table%201.jpg" style="width: 585px; height: 131px;" title="" /></p>
<p>If the amplitude is 80 and the direction is 10 degrees, then these values are inserted separately into the two positions in the blue areas in <strong>Table 2</strong>.</p>
<p><em><small><strong>Table 2</strong>: Values to insert in the corresponding positions if a pixel has an amplitude of 80 and an orientation of 10 degrees.</small></em></p>
<p><img alt="" src="/blog/Portals/11/Table%202.jpg" style="width: 600px; height: 135px;" title="" /></p>
<p>If the amplitude is 60 and the direction is 165 degrees, then these values are separately inserted into the two positions in the blue areas in <strong>Table 3</strong>.</p>
<p>(180 degrees and 0 degrees are equivalent in direction, so the amplitude is split between bin 0 and bin 8 at a ratio of 1:3, i.e., 15 and 45, respectively.)</p>
<p><em><small><strong>Table 3</strong>: Values to insert in the corresponding positions if the amplitude is 60 and the direction is 165 degrees.</small></em></p>
<p><img alt="" src="/blog/Portals/11/Table%203.jpg" style="width: 600px; height: 135px;" title="" /></p>
<p><strong>Table 1</strong> above shows the interpolation method used when the direction value is exactly the same as the corresponding value of the bin, <strong>Table 2</strong> shows the interpolation method used when the direction value falls between two bin values, and <strong>Table 3</strong> shows the interpolation method used when the direction value is greater than the maximum bin value. Applying these three methods with the cell as the unit of measure, the amplitudes of all pixels in a cell are accumulated as the cell is traversed. For example, in the sample above we obtained two amplitude values of 40 and 15 at bin 0, so our histogram for bin 0 has so far accumulated up to 55. The amplitudes of each cell from bins 1 through 8 are calculated and summed in the same manner. We ultimately end up with a histogram similar to the following (<strong>Figure 7</strong>), with horizontal coordinate X indicating gradient direction and vertical coordinate Y indicating gradient amplitude.</p>
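<p>The three interpolation cases can be sketched as a single routine: the amplitude is split linearly between the two nearest bin values (spaced 20 degrees apart), with 180 degrees wrapping back around to bin 0:</p>

```python
def add_to_histogram(hist, amplitude, angle, n_bins=9):
    """Insert one pixel's gradient amplitude into an unsigned (0-180 deg) histogram."""
    bin_width = 180.0 / n_bins            # 20 degrees per bin
    pos = angle / bin_width               # fractional bin position
    lo = int(pos) % n_bins                # lower neighboring bin
    hi = (lo + 1) % n_bins                # upper neighbor (180 deg wraps to bin 0)
    frac = pos - int(pos)                 # fractional distance from the lower bin
    hist[lo] += amplitude * (1.0 - frac)
    hist[hi] += amplitude * frac
    return hist

hist = [0.0] * 9
add_to_histogram(hist, 80, 20)    # Table 1: all 80 goes to bin 1
add_to_histogram(hist, 80, 10)    # Table 2: 40 to bin 0, 40 to bin 1
add_to_histogram(hist, 60, 165)   # Table 3: 45 to bin 8, 15 to bin 0 (1:3 split)
print(hist[0], hist[1], hist[8])  # 55.0 120.0 45.0
```

<p>The printed values match the accumulated totals described above: bin 0 holds 40 + 15 = 55, and bin 1 holds 80 + 40 = 120.</p>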
<p><img alt="" src="/blog/Portals/11/Figure%208.jpg" style="width: 542px; height: 345px;" title="" /></p>
<p><em><small><strong>Figure 7</strong>: An example of a gradient histogram: The horizontal axis corresponds to the bin number, and the vertical axis corresponds to the calculated amplitude. The numbers corresponding to the vertical axis of the figure are given for illustrative purposes only. For example, the bin 1 amplitude values in the three tables above are summed to get 80 + 40 + 0 = 120 as the final result, and we obtain 120 for the histogram vertical coordinates. As we continue to calculate the value of bin 1, the value will continue to accumulate. (Source: Original artwork created for Mouser)</small></em></p>
<p>Experimental results have shown that the best results for target detection are obtained when using nine bins and unsigned interpolation.</p>
<p>2. Signed: (0, 2π)</p>
<p>When a plus or minus sign is added to the directional value, the full 0- to 360-degree range must be covered. If each sector again spans 20 degrees (π/9 radians), the circle is divided into 18 sectors, and each of the nine bins corresponds to a pair of opposite sectors. For example, for the second bin, a positive value is interpolated into the 20–40 degree range (the blue sector), while a negative value is interpolated into the 200–220 degree sector (<strong>Figure 8</strong>).</p>
<p><img alt="" src="/blog/Portals/11/x%20y%20axis.jpg" style="width: 600px; height: 385px;" title="" /></p>
<p><em><small><strong>Figure 8</strong>: In signed interpolation, each sector represents a bin coverage angle range value. Red corresponds to bin number one, which covers 0 to 20 degrees. Moving clockwise in order, green marks the end at 340–360 degrees. The blue area indicates the corresponding bins in both directions. (Source: Original artwork created for Mouser)</small></em></p>
<h2>An Overview of the SVM Workflow</h2>
<p>An SVM (support vector machine) separates two classes in space with a hyperplane. In two-dimensional space, the hyperplane can be understood simply as a line, where y satisfies:</p>
<p><img alt="" src="/blog/Portals/11/y%20is%20found%20and%20y%20satisfies.png" style="width: 157px; height: 34px;" title="" /></p>
<p>The y value determines whether the sample is classified as positive or negative. However, in order to determine the optimal hyperplane, we introduce the concepts of support vectors and the maximum margin. Our goal is to find the hyperplane that maximizes the separation between it and the points closest to it (<strong>Figure 9</strong>).</p>
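<p>To make the geometry concrete, the following sketch evaluates the decision function sign(w·x + b) and the geometric distance to the hyperplane for a few points. The values of w and b here are hand-chosen for illustration, not the output of a trained model:</p>

```python
import numpy as np

# Illustrative hyperplane w.x + b = 0 in 2D (hand-chosen, not trained)
w = np.array([1.0, 1.0])
b = -3.0

def classify(x):
    """Sign of the decision function: +1 positive class, -1 negative class."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def margin_distance(x):
    """Geometric distance from point x to the hyperplane."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

positive = np.array([3.0, 2.0])   # w.x + b = 2  -> class +1
negative = np.array([0.0, 1.0])   # w.x + b = -2 -> class -1
print(classify(positive), classify(negative))   # 1 -1
print(margin_distance(positive))                # 2 / sqrt(2) ~ 1.414
```

<p>Training an SVM amounts to choosing w and b so that the smallest such distance over all training points, the margin, is as large as possible.</p>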
<p><img alt="" src="/blog/Portals/11/Red%20Line.jpg" style="width: 543px; height: 531px;" title="" /></p>
<p><em><small><strong>Figure 9</strong>: The red line corresponds to the hyperplane, the points on either side of the dashed line are support vectors, and the calculated value is 1 (positive class) or -1 (negative class). The blue point corresponds to a positive sample, and the green point corresponds to a negative sample. The goal is to find the value exhibiting the largest distance between the dashed lines, since a larger distance indicates a better binary classification model. (Source: Wikipedia)</small></em></p>
<p>Because of the high complexity of data presented in real situations, kernel functions are sometimes introduced as needed in order to map low dimensionality onto high dimensionality and render linearly inseparable data (<strong>Figure 10</strong>) linearly separable by finding the optimal hyperplane.</p>
<p><img alt="" src="/blog/Portals/11/Red%20circle.jpg" style="width: 600px; height: 362px;" title="" /></p>
<p><em><small><strong>Figure 10</strong>: The points in the red circle are linearly inseparable from the points in blue in two-dimensional space, so a kernel function is needed to map the points into a higher-dimensional coordinate system. (Source: Original artwork created for Mouser)</small></em></p>
<p>SVMs are computationally intensive and time-consuming to train because the similarity of each point to every other point must be computed. SVMs are therefore best suited to training binary classification models on small amounts of data, and if multiple categories are involved, multiple models are usually trained separately. In addition, two open-source tools developed by professors at National Taiwan University are now very popular among scientists. The first is LibSVM, and the other is Liblinear, which was developed for efficient linear classification on large amounts of data.</p>
<p>SVMs are extremely parameter-sensitive. When training with LibSVM or Liblinear, it is important to pay close attention to the penalty term C and the weighting factor w. The larger C is, the better the classification effect during training. However, when C is too large, overfitting can occur, meaning that training-sample classification accuracy will be extremely high but test accuracy will be extremely low. There will inevitably be data points that lie far away from the central cluster of the set, and the size of C reflects our willingness to drop these outliers: a larger value for C means we are reluctant to discard them, so the model fits the training set particularly well but not the test set. The weight w represents the coefficient for positive and negative samples; if we want more targets to be detected, we can increase the positive-sample weight. However, doing so will raise the false-positive (FP) rate. Conversely, an increased negative-sample weight will reduce the false-positive rate, but the true-positive (TP) rate will naturally also decrease.</p>
<p>A simple experiment carried out by the author found that, with a million data points and 1152-dimensional features, training took 20 minutes using 18 threads on two CPUs with 60GB of RAM under Windows 10. It is therefore recommended to either use the Liblinear library when training on a very large corpus or increase the available computer memory.</p>
<div>
<h2>Comparison and Summary</h2>
</div>
<p>In this article, we focused on understanding feature calculation in the context of vehicle detection and briefly explored the SVM classification strategy. When HOG features are used for vehicle detection, unsigned interpolation into nine bins, yielding features with more than 1,000 dimensions, is recommended, and an uneven feature distribution must be normalized. For the SVM, kernel functions can be selected as needed, and the Liblinear library can be used to train models on very large amounts of data if necessary. The SVM's kernel-function mechanism for mapping low-dimensional spaces to high-dimensional spaces effectively solves the problem of linear inseparability. The computational complexity of an SVM is determined by the number of support vectors, and fortunately the final decision function depends on only a small number of them. SVMs also have their limitations: without the LibSVM/Liblinear open-source libraries, processing large amounts of data with an SVM alone can be very difficult. This is because the SVM calculation process involves matrix calculations whose row and column counts are determined by the number of samples, so large sample sets consume a great deal of time and space during the calculation process. Similarly, in practice, the choice of whether or not to leverage HOG should be based on a balanced consideration of the technique's advantages and disadvantages, which are summarized here for the reader's reference.</p>
<ul>
<li><strong>Advantages:</strong> HOG operates on local units, which allows it to capture local shape information well while ignoring lighting, color, and other factors. For example, the color of a car can be ignored during vehicle detection, reducing the required number of feature dimensions, and because the features are computed locally, vehicles can often still be detected even when partially blocked from view.<br />
</li>
<li><strong>Disadvantages:</strong> HOG is not very good at dealing with occlusion, and changes in vehicle direction are not easy to detect. Because of the nature of the gradient, HOG is quite sensitive to noise, so after blocks and cells are split into local area units, it is often necessary in practice to perform Gaussian smoothing to remove noise. The determination of feature dimensions (cell, block, step size) is very demanding, and in practice, it is necessary to make multiple attempts to achieve an optimal solution.</li>
</ul>
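<p>As noted in the disadvantages, Gaussian smoothing is often applied before the gradient step to suppress noise. A minimal sketch using a separable kernel follows; the kernel size and sigma are arbitrary illustrative choices, not values prescribed by the article:</p>

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """A normalized 1D Gaussian kernel (applied twice for 2D smoothing)."""
    x = np.arange(size) - size // 2
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()   # normalize so overall brightness is preserved

def smooth(img, size=5, sigma=1.0):
    """Separable Gaussian blur: filter each row, then each column."""
    k = gaussian_kernel(size, sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              img.astype(np.float64))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

# Smoothing a noisy patch reduces pixel-to-pixel variation before
# gradients are computed, making the HOG features less noise-sensitive.
noisy = np.random.default_rng(0).normal(100, 10, size=(40, 40))
blurred = smooth(noisy)
```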
<p>I hope that this article has given you a clearer idea of the current state of the art in vehicle detection. Although combining SVM and HOG is computationally intensive, the associated hardware cost is low, and the model can be trained on a standard CPU, which makes this approach very popular among small and medium-sized product-development companies. For example, the development of small components such as onboard cameras is likely to continue to yield better price-performance while ensuring an acceptable level of usability.</p>