Street view number detection is a natural scene text recognition problem, which is quite different from printed-character or handwritten recognition. Research in this field started in the 90's, but it is still considered an unsolved problem. As mentioned earlier, the difficulties arise from variations in fonts, scales, rotations, low lighting, etc.
In earlier years, natural scene text identification was handled sequentially: first, characters were classified using sliding windows or connected components [4]. After that, word prediction was done by applying the character classifier in a left-to-right manner. More recently, segmentation methods guided by a supervised classifier have been used, where words are recognized through a sequential beam search [4]. However, none of these approaches solves the street view recognition problem.
In recent work, convolutional neural networks have proven their capability to solve object recognition tasks accurately [4]. Some research has applied CNNs to scene text recognition tasks [4]. Studies on CNNs show their capability to represent all types of character variation in natural scenes, and so far they have handled this high variability well. Analysis with convolutional neural networks started in the early 80's, and they were successfully applied to handwritten digit recognition in the 90's.
With the recent development of computing resources, larger training sets, advanced algorithms, and dropout training, deep convolutional neural networks have become more efficient at recognizing natural scene digits and characters [4]. Previously, CNNs were mainly used to detect a single object in an input image [3]; it was quite difficult to isolate each character from a single image and identify it. Goodfellow et al. solved this problem by using a large, deep CNN to model the whole image directly, with a simple graphical model as the top inference layer [4].

The rest of the paper is organized as follows: section III describes the convolutional neural network architecture; section IV covers the experiment, results, and discussion; and section V presents future work and the conclusion.

A Convolutional Neural Network (CNN) is a multilayer network for handling complex and high-dimensional data; its architecture is similar to that of a typical neural network. Each layer contains neurons which carry weights and biases [8].
Each neuron takes an image region as input and passes it onward, which reduces the number of parameters in the network. The first layer is a convolutional layer [7]. Here the input is convolved with a set of filters to extract features. The size of the feature maps depends on three parameters: the number of filters, the stride size, and the padding. After each convolutional layer, a non-linear operation, ReLU, is applied; it converts all negative values to zero. Next is the pooling or sub-sampling layer, which reduces the size of the feature maps.
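The relationship between filter size, stride, and padding and the resulting feature-map size can be sketched as follows (the helper function name is mine, for illustration only):

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Spatial size of a feature map after a convolution:
    out = (in - filter + 2 * padding) / stride + 1."""
    return (input_size - filter_size + 2 * padding) // stride + 1

# A 32x32 input convolved with 5x5 filters, stride 1, no padding,
# yields 28x28 feature maps, matching the first layer of this experiment.
print(conv_output_size(32, 5))  # 28
```

The same formula covers sub-sampling: a 2x2 pool with stride 2 on a 28x28 map gives `conv_output_size(28, 2, 2) == 14`.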
Pooling can be of different types: max, average, or sum, but max pooling is the most commonly used. Down-sampling also helps control overfitting. The pooling layer output is used to build the feature extractor, which retrieves selective features from the input images.
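As a toy illustration of max pooling (pure Python, not the TensorFlow op used in the experiment), a 2x2 max pool halves each spatial dimension by keeping only the largest activation in each window:

```python
def max_pool_2x2(fmap):
    """Down-sample a 2D feature map (list of lists) by taking the
    maximum over non-overlapping 2x2 windows."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]), 2)]
            for i in range(0, len(fmap), 2)]

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool_2x2(fmap))  # [[6, 8], [14, 16]]
```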
These layers are followed by fully connected layers (FCL) and the output layer. In a CNN, the output of the previous layer serves as the input of the next layer. The architecture differs for different types of problems.

The main objective of this project is detecting and identifying house-number signs in street view images. The dataset I am considering for this project is the Street View House Numbers (SVHN) dataset, taken from [5], which has similarities with the MNIST dataset. The SVHN dataset has more than 600,000 labeled characters, and the images are in .png format.
After extracting the dataset, I resize all images to 32x32 pixels with three color channels. There are 10 classes, one for each digit: digit '1' is labeled as 1, '9' is labeled as 9, and '0' is labeled as 10. The dataset is divided into three subsets: a train set, a test set, and an extra set [5]. The extra set is the largest, containing 531,131 images; correspondingly, the train set has 73,257 images and the test set has 26,032 images.

Figure 3 shows examples of the original, variable-resolution, colored house-number images, where each digit is marked by a bounding box. Bounding box information is stored in a digitStruct.mat file rather than being drawn directly on the images in the dataset. The digitStruct.mat file contains a struct called digitStruct whose length equals the number of original images. Each element of digitStruct has the following fields: "name", a string containing the filename of the corresponding image, and "bbox", a struct array that contains the position, size, and label of each digit bounding box in the image. For example, digitStruct(100).bbox(1).height is the height of the 1st digit bounding box in the 100th image.
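One detail worth encoding explicitly is the SVHN label convention above, where digit '0' carries class label 10. A small helper (my own, for illustration) maps between digits and labels:

```python
def svhn_label(digit):
    """Map a digit 0-9 to its SVHN class label: '1'->1, ..., '9'->9, '0'->10."""
    return 10 if digit == 0 else digit

def label_to_digit(label):
    """Inverse mapping: SVHN class label (1-10) back to the digit it denotes."""
    return 0 if label == 10 else label

print([svhn_label(d) for d in range(10)])  # [10, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```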
It is very clear from Figure 3 that most house-number signs in the SVHN dataset are printed signs and are easy to read [5]. However, the large variation in font, size, and color makes detection very difficult [2]. The variation in resolution is also large (median: 28 pixels; max: 403 pixels; min: 9 pixels).
The graph below indicates that there is a large variation in character heights, as measured by the heights of the bounding boxes in the original street view dataset [2]. That means the size of the characters, their placement, and their resolution are not evenly distributed across the dataset. Because the data are not uniformly distributed, correct house-number detection is difficult.

In my experiment, I train a multilayer CNN for street view house number recognition and check the accuracy on the test data. The coding is done in Python using TensorFlow, a powerful library for implementing and training deep neural networks. The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions.
A tensor's rank is its number of dimensions. Along with TensorFlow, I used some other libraries such as NumPy, Matplotlib, and SciPy [9]. I perform my analysis using only the train and test datasets, due to limited technical resources, and omit the extra dataset, which is almost 2.7 GB. To make the analysis simpler, I delete all data points that have more than 5 digits. By preprocessing the data from the original SVHN dataset, a pickle file is created, which is then used in my experiment.
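The preprocessing step above can be sketched as follows; the dictionary keys and array contents are illustrative assumptions, not the exact layout of my pickle file:

```python
import pickle

# Hypothetical preprocessed dataset: one flattened, normalized 32x32 image
# and its SVHN label (10 stands for digit '0').
dataset = {
    "train_dataset": [[0.5] * (32 * 32)],
    "train_labels": [10],
}

# Serialize once during preprocessing...
with open("svhn.pickle", "wb") as f:
    pickle.dump(dataset, f)

# ...and reload at training time without redoing the preprocessing.
with open("svhn.pickle", "rb") as f:
    restored = pickle.load(f)

print(restored["train_labels"])  # [10]
```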
For the implementation, I randomly shuffle the valid dataset, then use the pickle file to train a 7-layer convolutional neural network. At the very beginning of the experiment, the first convolutional layer has 16 feature maps with 5x5 filters and produces a 28x28x16 output. A ReLU layer is added after each convolutional layer to add more non-linearity to the decision-making process. After the first sub-sampling, the output size decreases to 14x14x16. The second convolution has 32 feature maps with 5x5 filters and produces a 10x10x32 output.
After applying sub-sampling a second time, the output size is 5x5x32. Finally, the third convolution has 2048 feature maps with the same filter size. Note that the stride size is 1 in my experiment, along with zero padding. During the experiment, I use the dropout technique to reduce overfitting. Finally, a softmax regression layer is used to produce the final output. Weights are initialized randomly using Xavier initialization, which keeps the weights in the right range; it automatically scales the initialization based on the number of input and output neurons. After the model is built, I train the network and log the accuracy, loss, and validation accuracy every 500 steps.
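Xavier (Glorot) initialization scales the random weights by the layer's fan-in and fan-out. A minimal sketch of the commonly used normal-distribution variant follows; the helper names are mine and this is not my exact TensorFlow call:

```python
import math
import random

def xavier_stddev(fan_in, fan_out):
    """Standard deviation for Xavier/Glorot normal initialization:
    sqrt(2 / (fan_in + fan_out))."""
    return math.sqrt(2.0 / (fan_in + fan_out))

def init_weights(fan_in, fan_out):
    """Draw a fan_in x fan_out weight matrix from N(0, stddev^2)."""
    s = xavier_stddev(fan_in, fan_out)
    return [[random.gauss(0.0, s) for _ in range(fan_out)]
            for _ in range(fan_in)]

# With 100 inputs and 100 outputs the stddev is sqrt(2/200) = 0.1,
# so early activations stay in a moderate range.
print(round(xavier_stddev(100, 100), 4))  # 0.1
```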
Once the process is done, I get the test set accuracy. To minimize the loss, the Adagrad optimizer is used. After reaching a suitable accuracy level, I stop training the network and save the hyperparameters in a checkpoint file. When we need to perform detection, the program loads the checkpoint file without training the model again. Initially, the model produced an accuracy of 89% with just 3,000 steps. That is a great starting point, and certainly, after a few more rounds of training, the accuracy would reach 90%. However, I added some additional features to increase accuracy. First, I added a dropout layer between the third convolution layer and the fully connected layer.
This allows the network to become more robust and prevents overfitting. Secondly, I introduced exponential decay to calculate the learning rate, with an initial rate of 0.05, decaying every 10,000 steps with a base of 0.95.
This helps the network take bigger steps at first, so it learns fast, but over time, as we move closer to the global minimum, it takes smaller steps. With these changes, the model is now able to produce an accuracy of 91.9% on the test set.
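The decay schedule above (initial rate 0.05, base 0.95 every 10,000 steps) can be sketched as follows; I use the staircase form here, which is an assumption about the exact TensorFlow call:

```python
def decayed_learning_rate(step, initial_rate=0.05, decay_base=0.95,
                          decay_steps=10000):
    """Staircase exponential decay: the learning rate drops by a factor
    of decay_base once every decay_steps training steps."""
    return initial_rate * decay_base ** (step // decay_steps)

print(decayed_learning_rate(0))                  # 0.05
print(round(decayed_learning_rate(10000), 6))    # 0.0475
```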
Since the training and test sets are large, there is a chance of further improvement if the model is trained for a longer time.