Searching surveillance video contents using convolutional neural network

Received Feb 24, 2020; Revised Aug 4, 2020; Accepted Nov 4, 2020

Manual video inspection, searching, and analysis is exhausting and inefficient. This paper presents an intelligent system for searching surveillance video contents using deep learning. The proposed system reduces the amount of work needed to perform video searching and improves both speed and accuracy. A pre-trained VGG-16 CNN model is used for dataset training. In addition, key frames of the videos are extracted in order to save space, reduce the amount of work, and reduce the execution time. The extracted key frames are processed using the Sobel edge detector and max-pooling in order to eliminate redundancy; this increases compaction and avoids similarities between extracted frames. A text file that contains the key frame index, the time of occurrence, and the classification produced by the VGG-16 model is generated. The text file enables humans to easily search for objects of interest. The VIRAT and IVY LAB datasets were used in the experiments, and 128 different classes were identified in the datasets. The classes represent important objects for surveillance systems; however, users can define other classes and still utilize the proposed methodology. Experiments and evaluation showed that the proposed system outperforms existing methods by an order of magnitude: it achieved the best results in speed while providing high classification accuracy.


INTRODUCTION
Video content analysis (VCA) is the method of analyzing video streams to detect and determine temporal and spatial events, i.e., to find what a video represents and the type of information it holds [1]. It is used by applications that require high security to detect intruders and abnormal events, especially fraudulent actions [2]. Airports, hotels, banks, and other public places need VCA to ensure a secure environment for clients and staff. Market owners can improve their productivity by understanding their customers' reactions, needs, and desires. Moreover, VCA is of high importance in subway stations to detect dangerous situations and secure the areas [3]. Transportation systems rely on VCA to ensure passenger security, vehicle control, and better tracking methods [4]. Furthermore, video analysis is used to detect underwater objects [5]. Hence, video content searching systems must be able to search videos quickly. However, the amount of data reported by surveillance cameras is huge, which puts a challenge on the search process. It is important to organize and search the contents of videos in a way that lets users find objects of interest within a short time.

Deep convolutional neural networks (CNNs) are used in different domains such as image classification [6, 7], object detection [8, 9], and natural language processing [10, 11]. These networks are computationally intensive; however, they provide high accuracy in object detection. CNNs contain three types of layers: convolutional layers, pooling layers, and fully connected layers. In some architectures, an activation function layer (e.g., a rectified linear unit (ReLU)) follows the convolutional layers. There have been successful applications of CNNs to image classification, such as the AlexNet model, which achieved a 15.3% top-5 test error rate in ILSVRC-2012 [6]. In addition, ZF Net [12] and the GoogLeNet model [13] achieved excellent performance. The VGG-16 model [14] and the ResNet model of He et al. [15] were also designed for image classification and object detection. CNNs are further used in face recognition, such as the work by Li et al., who proposed a multi-resolution CNN cascade for fast face detection [16]. Furthermore, Sun et al. proposed two deep neural network architectures, DeepID3, which achieved 99.53% accuracy in LFW face verification and 96.0% LFW rank-1 face identification [17]. In addition, CNNs have been used for medical image analysis [18, 19]; as early as 1994, CNNs were used to detect micro-calcifications in digital mammography [20].
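To make the layer taxonomy concrete, the following minimal MATLAB sketch (assuming the Deep Learning Toolbox; the layer counts and sizes are arbitrary placeholders, not the architecture of any model discussed here) stacks the three layer types together with a ReLU activation:

```matlab
% A minimal CNN layer stack (illustrative sizes only).
layers = [
    imageInputLayer([224 224 3])                  % RGB input image
    convolution2dLayer(3, 64, 'Padding', 'same')  % convolutional layer: 64 filters of size 3x3
    reluLayer                                     % activation function (ReLU)
    maxPooling2dLayer(2, 'Stride', 2)             % pooling layer: 2x2 max-pooling
    fullyConnectedLayer(10)                       % fully connected layer (10 classes here)
    softmaxLayer
    classificationLayer];
```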
Much of the past and ongoing research aims to analyze video contents using different methods. Lao et al. proposed a system for semantic analysis of human behavior in monocular surveillance video captured by a consumer camera [21]. The authors combined a trajectory estimation method with human-body modeling to carry out semantic analysis of human activities and events in video sequences. Zhao and Cai employed a short-time memory model to segment a given video and to specify the scene importance for key frame extraction [22]. Bertini et al. presented a framework for event and object extraction from soccer videos [23]. The authors applied semantic transcoding to the frames that contain events and human faces; four classes of events were detected in their framework. Kolekar introduced a probabilistic approach for video analysis and indexing based on a Bayesian belief network (BBN) [24]. A hierarchical classification framework extracts features from the videos, and the BBN then assigns a semantic label to each event in the video clips. Furthermore, Chen and Zhang proposed a video content analysis system that uses autoregressive (AR) modeling to model the feature sequence of frames over time [25]. Sun et al. introduced a video analysis method that depends on the color distributions between frames [26]; the distributions are used to search the video frames.
Furthermore, Sharif et al. proposed a detection system that uses an entropy measure to partition a video into small spatio-temporal patches [27]. However, their system measures the background features only and does not assess the behavior of individuals and moving objects. Cernekova et al. extracted key frames using mutual information and joint entropy to ease the search of video contents [28]. Zeng et al. applied a block-based Markov random field (MRF) model to segment the moving objects obtained from video frames, and used backtracking to select the key frames [29]. Zhou et al. proposed a non-uniform sampling method, as well as a simple uniform sampler (Uni), for summarizing long video content [30]. The sampling method extracts important features and produces a short video that users can search faster; their system takes one second to retime each video and ten seconds to render each frame. On the other hand, Bai et al. introduced a video semantic content analysis framework that depends on domain ontology [31]. The authors used low-level algorithms to extract both high-level and low-level features in the videos, while the video event detection is performed manually. Foggia et al. introduced a fire detection system for the analysis of surveillance videos [32]. Their system relies on color, shape variation, and motion analysis to detect a fire; it uses the YUV color space and scale-invariant feature transform (SIFT) descriptors for blob movement detection, and a multi-expert system (MES) produces the final prediction. However, the system suffered from a considerable rate of false positives.

This paper proposes a system for searching surveillance video contents (SSVC). The SSVC system uses a CNN for object recognition and classification. Specifically, it uses the VGGNet model, which was developed by Oxford's Visual Geometry Group (VGG) [14]. The model scored first place in image localization and second place in image classification [33]. There are different configurations of VGGNet, such as VGG-16 and VGG-19; VGG-16 has 13 convolutional layers and 3 fully connected layers, and it is the configuration used by the SSVC system. The system generates a text file that contains the different classes of the detected objects and the time of appearance of each object in the video, as well as the frame index. To improve the performance, the SSVC system processes only the specific frames that hold most of the information (key frames). Furthermore, a matching process eliminates redundant key frames. As a result, many of the extracted frames never reach the VGG-16 network, which improves the speed of the system. The VIRAT and IVY LAB datasets are used in the experiments; results show that the SSVC system outperforms previously proposed methods in terms of speed.

The rest of this paper is organized as follows. Section 2 presents the methodology of the proposed system and the different techniques employed to enhance video analysis, together with the experiments that evaluate each technique. Section 3 concludes the paper.

METHODOLOGY AND EXPERIMENTAL RESULTS
MATLAB is used for the implementation and training of the pre-trained CNN model. In addition, FFmpeg is used for key frame extraction, as in [34]; FFmpeg supports different video and audio formats. For CNN re-training, an NVIDIA Tesla K80 GPU, which has 2496 CUDA cores, is used.

Methodology
In this work, VGG-16 is re-trained using a dataset that contains 48,000 images. The VGG-16 model was not trained from scratch, to save time; in addition, training the VGG-16 model from scratch on a relatively small dataset causes network overfitting. Hence, transfer learning is applied to a VGG-16 model that was trained on a very large dataset (ImageNet, which contains 1.2 million images in 1000 categories). The last fully connected layer of the trained model was removed and replaced with a layer tailored to the 128 categories of our system. Appropriate training parameter values were selected manually, by trial and error, while monitoring the performance and accuracy of the VGG-16 model.
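A minimal MATLAB sketch of this transfer-learning step is given below, assuming the Deep Learning Toolbox and its VGG-16 add-on; the variable imdsTrain stands for a labeled image datastore (a sketch of its construction follows the next paragraph), and the option values other than the mini-batch size are illustrative assumptions, not the authors' exact settings:

```matlab
% Transfer learning sketch: reuse the convolutional features of VGG-16
% and retrain a new head for the 128 surveillance categories.
net = vgg16;                            % pre-trained on ImageNet (1000 classes)
layersTransfer = net.Layers(1:end-3);   % drop the last FC, softmax, and output layers
numClasses = 128;
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses)     % new last layer tailored to our categories
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 160, ...           % 160 random images per iteration (Table 2)
    'MaxEpochs', 1, ...                 % 48,000 images / 160 per batch = 300 iterations
    'InitialLearnRate', 1e-4, ...       % assumed value; the paper tuned this by trial and error
    'Shuffle', 'every-epoch');
netTransfer = trainNetwork(imdsTrain, layers, options);
```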
The training dataset is chosen according to the probability of appearance in surveillance videos. Table 1 lists the categories that were selected for our VCA system; these objects are of interest in surveillance videos. Nevertheless, users can add or remove objects as they prefer and still follow the methodology of the proposed system. The system considers 128 different objects. Some of the selected objects are considered dangerous and may harm humans, such as fire, smoke, guns, revolvers, nails, and drill tools. In addition, animals like scorpions, snakes, and spiders are considered. The dataset also has normal objects that are mainly used by humans, such as cars, motorbikes, bags, cameras, mobile phones, computers, ATMs, irons, and vacuum cleaners. Examples of objects that were excluded are sea objects, clothes, and musical instruments. After the dataset has been prepared, each image is labeled and used in the training of the pre-trained VGG-16 model. Table 2 shows the assigned values for the most important training parameters of the VGG-16 model. The assigned values were tested and verified, through trial and error, to provide the best results. These parameters are important to fit the training images in memory and to avoid overfitting of the CNN network.
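For the labeling step, one convenient sketch (assuming the images are organized in one folder per category, e.g. dataset/fire, dataset/gun; the paper does not specify the directory layout) uses an image datastore:

```matlab
% Label each image by its parent folder name (one folder per class).
imds = imageDatastore('dataset', ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
% Hold out part of the data for validation (the 90/10 split is an assumption).
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.9, 'randomized');
```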
The training script is executed for 300 iterations (Table 2). At each iteration, 160 images are chosen at random from the training set; the network layers are trained over these images, and the predictions are compared against the ground truth. Figure 1 shows the training results: the network reaches an accuracy of 88.91%. Figures 2 and 3 show different testing examples in which the trained network was able to correctly identify objects.

Figure 3. Examples of objects that were correctly detected by the VGG-16 network

Speeding up the VCA system
To enhance the speed of the VCA system, three techniques were used to improve the performance: key frame extraction, the Sobel edge detector, and max-pooling, described as follows.

Key frames
In a video, the complete picture information exists in the intra-coded (I) frames, which are also called key frames; the set of key frames is a short representation of the video. To increase the speed of the proposed system, only key frames are used. The FFmpeg program is used to extract the frames; the extraction is performed for different frame rates such as 25, 30, and 60 frames per second. Moreover, in different surveillance videos, similarities were found between sequences of the extracted key frames. Hence, the Sobel detector is used for enhancement, to gain more discrimination between the frames.
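As a sketch, the key (I) frames can be extracted with FFmpeg's select filter invoked from MATLAB; the input file name is a placeholder, and the exact command line used by the authors is not stated in the paper:

```matlab
% Extract only the intra-coded (I) frames, i.e. the key frames, as PNG images.
cmd = 'ffmpeg -i input.mp4 -vf "select=''eq(pict_type,I)''" -vsync vfr key_%04d.png';
status = system(cmd);   % status == 0 indicates success
```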

Sobel edge detector
The second step in speeding up the proposed system is using the Sobel edge detector to determine the most useful representation of the key frames. The Sobel operator detects the edges in the extracted key frames, which are later used to find matchings between consecutive frames. If a sequence of frames matches according to a threshold, then the system eliminates the second frame, because there is no need to classify it with the VGG-16 network. The Sobel operator has high accuracy in detecting image edges. It uses two 3×3 kernels: the first for horizontal differences ($G_x$) and the second for vertical differences ($G_y$) [35]. Figure 4 shows the Sobel edge detector filters. The $G_x$ and $G_y$ filters are convolved with the image, and the gradient magnitude is computed using (1); the gradient direction is computed using (2) [35]:

$$G = \sqrt{G_x^2 + G_y^2} \qquad (1)$$

$$\theta = \arctan\!\left(\frac{G_y}{G_x}\right) \qquad (2)$$

The Sobel edge detector is applied to the extracted key frames and the edges are then computed. The output image has the same size as the original one (i.e., 224×224).
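A direct MATLAB sketch of the operator and of (1) and (2), assuming a grayscale key frame I already resized to 224×224:

```matlab
% Sobel kernels: Gx responds to horizontal differences, Gy to vertical ones.
Gx = [-1 0 1; -2 0 2; -1 0 1];
Gy = Gx';
I  = double(I);                 % grayscale frame, e.g. 224x224
gx = conv2(I, Gx, 'same');      % horizontal gradient component
gy = conv2(I, Gy, 'same');      % vertical gradient component
G     = sqrt(gx.^2 + gy.^2);    % gradient magnitude, per (1)
theta = atan2(gy, gx);          % gradient direction, per (2)
```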

Max-pooling
After applying the Sobel edge detector, a 3×3 max-pooling sliding window is applied to extract the maximum value of each neighborhood. The extracted features are compared to make the matching decision more accurate, and similar frames are ignored. This method speeds up the system, since the analysis and classification are performed on a reduced number of key frames. A max-pooling sliding window of size 3×3 and a frame size of 224×224 were used. The matching decision is based on the number of matching pixels relative to the total number of pixels, as shown in (3):

$$\text{match} = \frac{N_{\text{matching pixels}}}{N_{\text{total pixels}}} \qquad (3)$$
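A sketch of the pooling and matching computation, assuming Sobel edge maps E1 and E2 from two consecutive key frames; ordfilt2 implements a sliding 3×3 maximum, and the pixel-equality tolerance is an assumption, since the paper does not state one:

```matlab
% 3x3 sliding-window max-pooling over the edge maps (9th order statistic = max).
P1 = ordfilt2(E1, 9, ones(3));
P2 = ordfilt2(E2, 9, ones(3));
% Matching fraction per (3): matching pixels over total pixels.
tol   = 1e-6;                                   % assumed equality tolerance
match = nnz(abs(P1 - P2) <= tol) / numel(P1);
```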
A threshold of 0.7 is used for the matching percentage: if the total match percentage is less than 0.7, then the frame is considered different from the next frame and needs to be analyzed. Table 3 shows the different threshold values that were tested; a value of 0.7 was found to produce relatively more accurate results for the key frames. To validate the selected threshold value, the OneLeaveShop2cor video from the CAVIAR dataset [36] is used. The video contains 63 key frames; however, there are similarities between different frames. Table 4 shows sample images using the threshold value of 0.7. When using threshold values of 0.9 or 1.0, the similarity score falls below the threshold and all of the frames are selected and classified.
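Schematically, the decision rule with the selected threshold then reads (netTransfer and frame2 are the hypothetical trained network and candidate 224×224×3 frame from the earlier sketches):

```matlab
THRESH = 0.7;                                 % threshold selected via Table 3
if match < THRESH
    label = classify(netTransfer, frame2);    % frame differs: classify it with VGG-16
else
    % frames are similar: skip classification of the second frame
end
```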
The effect of eliminating redundant frames is evaluated on an outdoor surveillance video with a length of 68 seconds. One extracted frame takes 0.135359 seconds to be classified by the VGG-16 network. The proposed system takes 16.804304 seconds to analyze the video without applying the Sobel detector and max-pooling, and 10.504688 seconds when these enhancements are used. This shows how the Sobel detector and max-pooling enhance the system speed without altering accuracy. The different steps of the proposed system are summarized in Figure 5.

Table 4. Testing the threshold 0.7 using the OneLeaveShop2cor video

Figure 5. The framework of the proposed system

Testing and evaluation
The video content analysis system is tested on a variety of surveillance videos that include different objects, movements, and events. The selected videos are taken from the Image Video System Lab (IVY) dataset [37] and the Video Image Retrieval and Analysis Tool (VIRAT) video dataset [38]. The experimental results are shown in Table 5. The proposed system is tested using nine videos that differ from each other in length, frame rate, and number of key frames. The execution time includes loading the trained network, extracting the frames, applying the Sobel detector, applying max-pooling, the matching stage, and saving the results into a text file. The experiments were performed using an Intel i7 PC and a Tesla K80 GPU card. The VGG-16 network requires 11 ms on average to classify one frame. Videos 7, 8, and 9 originally have 519, 612, and 31 key frames, respectively; however, using our mechanism of matching and ignoring similar frames, the proposed system classified only 29, 11, and 12 frames, respectively. Inspecting these videos reveals that they were recorded by fixed cameras and that the scenes contain little information with small transitions. Hence, the proposed system is useful for searching video contents. In addition, using the Tesla K80 speeds up the system by 12× compared with using the CPU only.

The output text file for video 2 is listed below. Video 2 contains 56 key (I) frames that represent two children walking in a hallway. As a result of applying the Sobel filter and max-pooling, only 12 frames were analyzed and classified. The proposed system analyzed the frames in 0.71 seconds using the GPU and 7.80 seconds using the CPU. As seen from the text file of video 2, there is no useful information before time 17.20 seconds, when the children appear in the video. This is an efficient way to find objects of interest in videos instead of searching manually and spending a lot of time: a user can search within the text file to find a time of interest and then fast-forward the video (e.g., to time 17.2 s); a minimal sketch of such a search is given after Table 6. The text file for video 9 (ATM machine) is also shown below. Video 9 contains 31 original key (I) frames that represent the area of the ATM machine; only 12 frames were classified. Again, our system analyzed the video very quickly (0.72 seconds on the GPU and 9.25 seconds on the CPU) compared with the length of the video (59 seconds).

Table 6 shows a comparison between our system and three systems that were proposed in the literature. Lee et al. proposed a video content method using a regression model that detects important objects and organizes them as a sequence of images [39]; their system needs 1 second to analyze each frame, with an accuracy of 68.75%. Meghdadi and Irani introduced an analysis system that processes 3 frames per second with an accuracy of about 69% [40]; it works by extracting multiple frames based on motion and then visualizing the trajectories of the objects in surveillance streams. Zhou et al. proposed an approach for video analysis using a space-time saliency method and a faster re-timing method to tackle long video indexing and summarization [30]; their system requires 11 seconds to process one frame. Table 6 shows that the proposed system outperforms the related work by an order of magnitude: it processes 6 frames per second, and the matching process alone can reach up to 40 frames per second.

Table 6. Processing speed comparison with related work
Lee et al. [39]              1 frame per second
Meghdadi and Irani [40]      3 frames per second
Zhou et al. [30]             1 frame per 11 seconds
The proposed SSVC system     6 frames per second
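Because the output is plain text, searching it is straightforward. For example, assuming hypothetical index lines of the form '<frame_index> <time_seconds> <class_label>', a user could locate a class of interest as follows:

```matlab
% Find every line of the index file that mentions a class of interest.
lines = readlines('video2_index.txt');                       % hypothetical file name
hits  = lines(contains(lines, 'person', 'IgnoreCase', true));
disp(hits)   % then fast-forward the player to the reported time, e.g. 17.2 s
```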

CONCLUSION
A new system for video content analysis has been proposed. The system uses the VGG-16 deep convolutional neural network for object detection and identification. The number of different object classes considered for security and surveillance systems is 128; however, users can define more classes of interest. In addition, three techniques were used to reduce the amount of information that needs to be processed by the VGG-16 network: extraction of key frames using FFmpeg, the Sobel edge detector, and max-pooling. The Sobel detector and max-pooling further reduce the number of frames that need processing. The output of the system is a human-readable text file that users can search for their objects of interest; it contains the objects and movements that occur in a video. Instead of manual video inspection for long periods, the presented system provides users with an easy and simple way to search video contents. The time to produce the text file is negligible compared with the size and length of the analyzed video. The results showed that the proposed system outperforms existing methods by an order of magnitude.