Inspection of carry-on bag scans for security at airports is currently a manual process. This results in long wait lines, frustrated travelers, and susceptibility to human error. At present, Computed Tomography (CT) is used to scan carry-on bags, and a security officer then examines each scan and decides whether the carry-on contains any unsafe items. Manual review of the scanned images takes time, and, because it is a manual process, the chance of human error is high. There is also no uniform analysis of scanned images, because decision making varies from one security officer to another.
A New Security Screening Solution
So how about we try something new? We could incorporate an Artificial Intelligence (AI) based solution into the current CT scan machines at airports, one that automatically distinguishes between safe and unsafe items in the scanned images of carry-on bags.
The major advantages of this system:
- Shorter wait lines and less traveler frustration. AI-based automated recognition of unsafe carry-on bags will make the carry-on security check process faster and help reduce traveler frustration. And when passengers are not waiting in lines, they might be at airport restaurants or shops, which could add to revenue generation.
- Higher accuracy and better security through reduced human error. A high level of accuracy will be possible through the use of machine learning algorithms, along with regular upgrades of the classifiers those algorithms use as labeled data becomes available for re-training. Once the algorithms are re-trained, they can be deployed to CT scan machines in the field cheaply and quickly, without significantly disrupting field operations.
So now you might be thinking: that sounds like a natural progression, hasn't anyone already researched this field? It's not that research has not been done. The U.S. Department of Homeland Security (DHS) has teamed up with Google and its crowdsourcing site, Kaggle, to search for new algorithms that identify concealed objects detected by airport security body scanners. The problem DHS and Google are trying to solve is not exactly the same as the one described in this blog, but it is a similar concept: using AI algorithms to detect concealed dangerous objects on the human body rather than in carry-on bags.
Some CT scan machine manufacturers have introduced 3D CT scanners that create 3D images of carry-on bags and help security officers inspect the scanned images in three dimensions. This gives security officers the ability to analyze each scanned image from various angles and make a better judgment of an object's security level in a carry-on. Though this approach is not based on AI algorithms, it is trying to solve the same problem of speeding up carry-on security bag checks at airports.
Creating the Solution
The idea is an artificial intelligence-based security bag-check solution that integrates with existing computed tomography (CT) scanning systems at airports. CT scans of carry-on bags at airport security yield a lot of information that security officers have to analyze individually and evaluate comprehensively in a short time. The proposed solution will use AI to help security officers make better and faster decisions by performing automated image recognition of dangerous objects in carry-on bags. It will use deep learning, built on deep neural network architectures, to automatically identify visual differences between safe and unsafe/abnormal items in scanned images. Figure 1 shows a very high-level pictorial representation of the AI solution described here, while Figure 2 and Figure 3 describe the AI algorithm in more detail.
Figure 1: Pictorial representation of the AI solution described in the blog
The novelty of this idea lies in an AI solution that performs inference on the edge (the CT scanners) with training happening in the cloud. The AI solution consists of three major steps:
- Initialization (training the model before deployment)
- Deployment (inference on the edge)
- Re-training (happening as needed)
The first step, initialization, consists of training an appropriate deep learning model before it can be deployed for use. One of the biggest challenges in this process is acquiring enough data to train the model. The performance of neural networks (i.e., deep learning) depends on the number of input images used to train them. For a more generic problem, such as recognizing a cat in a scene, the training set may not have to be huge to provide a reasonable level of accuracy. However, given the diverse nature of threatening items in a bag (e.g., guns, explosives, liquids, drugs, live animals) and the countless ways in which they can be positioned, the required number of training samples will be very large. This challenge of training data collection can be addressed in three ways:
- Collaboration with Department of Homeland Security (DHS) to make CT scan images available for training. This would be similar to what DHS is working on with Google for body scanners as mentioned above.
- Perform continuous data collection at airports to create a large, secure pool of training data. While this data is being collected, the decisions made by the human operators after examining each bag can be used as labels, establishing the "ground truth" that makes the collected data suitable for training the models. This approach also removes the need to finance offline labeling of the collected data, which can be very expensive. Because the CT scan machines used at airports are largely standardized devices, training data and trained models used on one device can be reliably reused on others. This also ensures a high, standardized level of security checks, independent of the experience of the security officers at various airports around the globe.
- Deep learning algorithms generally have feature extraction and classification as two main steps. Apart from getting real data from CT scanners at airports, the CT scan manufacturers can work to automatically generate multiple view angles of threatening objects (e.g., guns). This will help create a subset of features for known unsafe objects, which can be fed to the training model.
Figure 2: Initialization phase: Flowchart highlighting the data collection and labeling process performed by the human inspectors, with initially no AI in the loop.
Figure 2 shows a flowchart of how the AI model will be trained on safe and unsafe objects during the initialization phase. In this phase, human inspection happens as usual; however, the human inspector labels the scanned objects, and those labels are used for model training.
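To make the labeling step concrete, here is a minimal sketch of how inspector decisions could be captured as training data during the initialization phase. All class names, fields, and the batch size are illustrative assumptions, not part of any real CT scanner software.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LabeledScan:
    """One CT scan image paired with the inspector's ground-truth label."""
    image_id: str
    unsafe: bool                                           # inspector's final decision
    object_tags: List[str] = field(default_factory=list)   # e.g. ["knife"]

class TrainingPool:
    """Accumulates labeled scans until a batch is large enough to upload."""
    def __init__(self, batch_size: int = 1000):
        self.batch_size = batch_size
        self.scans: List[LabeledScan] = []

    def record(self, scan: LabeledScan) -> bool:
        """Store a labeled scan; return True when a full batch is ready."""
        self.scans.append(scan)
        return len(self.scans) >= self.batch_size

# Hypothetical usage: two inspector decisions fill a (tiny) batch.
pool = TrainingPool(batch_size=2)
pool.record(LabeledScan("bag-001", unsafe=False))
ready = pool.record(LabeledScan("bag-002", unsafe=True, object_tags=["knife"]))
# ready is now True: a full batch can be sent to the cloud for training
```

The key point of the sketch is that labeling costs nothing extra: the inspector's normal safe/unsafe decision is recorded alongside the scan, so ground truth accumulates as a byproduct of routine screening.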
In the second step, the trained model is deployed in the CT scanners at airports, assisting security officers in identifying unsafe objects in the scanned images of carry-on bags. Depending on the CT scanner model, deployment could potentially be just an upgrade to the scanner's current software stack, with the AI algorithm added to it. Once deployed, the AI algorithm will infer the safety level of each scanned carry-on bag image.
Figure 3: Flowchart showing the algorithm in the deployment phase
As shown in Figure 3, each time the deployed model infers the safety level of a bag, the result can fall into one of several states, depending on the inference confidence. If the model's confidence that the bag is safe exceeds the defined criteria, the bag passes the security check and moves ahead to be picked up by the traveler. However, if the confidence is below the required criteria, human inspection is needed. Depending on the safety of the bag, the human inspector marks it as safe or unsafe, and this information is sent to the cloud to be stored as training data.
In cases where the AI model flags a bag as unsafe, human inspection is also required. The human inspector then decides the safety level of the bag and sends the labeled information about the scanned image to the cloud for storage as training data.
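The routing described above can be sketched as a simple threshold rule. The function name, the two thresholds, and the use of a single "unsafe" score are all assumptions for illustration; a real system would tune these against operational requirements.

```python
def route_bag(unsafe_score: float,
              safe_threshold: float = 0.02,
              unsafe_threshold: float = 0.90) -> str:
    """Route a scanned bag given the model's 'unsafe' score in [0, 1].

    Thresholds are illustrative only:
      - score >= unsafe_threshold: model flags the bag as unsafe
      - score <= safe_threshold:   model is confident the bag is safe
      - anything in between:       confidence is too low to decide
    """
    if unsafe_score >= unsafe_threshold:
        return "inspect_flagged_unsafe"   # officer must examine the bag
    if unsafe_score <= safe_threshold:
        return "pass"                     # bag proceeds to the traveler
    return "inspect_low_confidence"       # officer cross-checks the scan

route_bag(0.01)   # "pass"
route_bag(0.95)   # "inspect_flagged_unsafe"
route_bag(0.40)   # "inspect_low_confidence"
```

Note that both inspection outcomes lead to the same human review step; distinguishing them matters only for logging which path produced the label.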
In the description of this solution, a "negative" means the bag is safe and the input image has no identified unsafe objects in it.
The four states mentioned in Figure 3 can be described as follows:
- True Negative: The AI model's confidence is below the set criteria, it infers that the bag is safe, and on cross-checking the security officer finds that to be true.
- False Negative: The AI model's confidence is below the set criteria and it infers that the bag is safe, while on cross-checking the security officer finds unsafe objects in the bag.
- True Positive: The AI model infers the input image to be unsafe, and the security officer agrees.
- False Positive: The AI model infers the input image to be unsafe, and the security officer finds that to be untrue.
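The four states above reduce to a standard confusion-matrix mapping between the model's inference and the officer's ground truth. A small helper (the function name is a hypothetical label, not from the solution) makes the mapping explicit:

```python
def outcome(model_unsafe: bool, officer_unsafe: bool) -> str:
    """Map the model's inference and the officer's ground truth to one
    of the four states in Figure 3. 'Negative' means the bag is safe."""
    if model_unsafe:
        return "true_positive" if officer_unsafe else "false_positive"
    return "false_negative" if officer_unsafe else "true_negative"

outcome(False, False)  # "true_negative"  — both agree the bag is safe
outcome(False, True)   # "false_negative" — model missed an unsafe object
outcome(True, True)    # "true_positive"  — both flag the bag as unsafe
outcome(True, False)   # "false_positive" — model raised a false alarm
```

False negatives are the costliest state for security, which is why, in this solution, human inspection remains in the loop for every low-confidence "safe" inference.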
The solution seeks to support security officers, not replace them. Security officers will always be responsible for the final interpretation of the scanned images. In all states of the AI algorithm, the final decision made by the security officer (the ground truth), along with the raw input image and the labeled data, is sent to the cloud database as part of the next batch of training data. In the case of a false negative or false positive, the security officer is expected to correctly label the specific objects that the AI model inferred incorrectly. This will help the system continuously evolve and improve its decision making, since data collection and labeling continue once the intelligent CT scan machines are deployed in the field. Security officers are ultimately responsible for accepting or rejecting the image tags, and the system uses this feedback to improve its accuracy and robustness as it encounters more examples.
Once a CT scanner with the AI algorithm has been deployed and is in use, re-training of the model will need to happen continuously over the cloud. The labeled input and output data from the deployment stage will be collected and stored on the edge device (the CT scanner) and sent to the cloud periodically for re-training the model, as shown in Figure 3. The updated, re-trained model (the new classifier) will be downloaded to the edge device on a regular basis to improve its inferencing accuracy.
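The edge-to-cloud re-training loop can be sketched as follows. Everything here is a toy stand-in under stated assumptions: the class and method names are hypothetical, uploads would really go through an authenticated cloud API, and bumping a version counter stands in for an actual re-training run.

```python
class Cloud:
    """Toy stand-in for the cloud training service."""
    def __init__(self):
        self.labels = []
        self.version = 1

    def receive_labels(self, batch):
        self.labels.extend(batch)
        self.version += 1          # stand-in for an actual re-training run

    def latest_model_version(self) -> int:
        return self.version

class EdgeScanner:
    """Minimal sketch of the edge (CT scanner) side of the loop."""
    def __init__(self, model_version: int = 1, upload_batch: int = 3):
        self.model_version = model_version
        self.upload_batch = upload_batch
        self.pending = []          # labeled (image_id, unsafe) pairs awaiting upload

    def store_labeled(self, image_id: str, unsafe: bool) -> None:
        self.pending.append((image_id, unsafe))

    def sync_with_cloud(self, cloud: Cloud) -> None:
        """Upload a full batch of labels, then pull any newer classifier."""
        if len(self.pending) >= self.upload_batch:
            cloud.receive_labels(self.pending)
            self.pending = []
        latest = cloud.latest_model_version()
        if latest > self.model_version:
            self.model_version = latest   # download the re-trained model

# Hypothetical usage: three labeled scans trigger an upload and a model update.
cloud = Cloud()
scanner = EdgeScanner()
for i in range(3):
    scanner.store_labeled(f"bag-{i}", unsafe=False)
scanner.sync_with_cloud(cloud)
# scanner.model_version is now 2
```

The design choice worth noting is that inference never blocks on the cloud: the scanner keeps using its current model and only swaps in a new classifier when a periodic sync finds one, which matches the blog's claim that field upgrades can happen without disrupting operations.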
Once the AI algorithm at the edge device has performed at the expected accuracy for a prolonged period of time, human inspection for the "negative" state of bags can be progressively removed, with only random cross-checks of the results. The re-training frequency of the model can also be adjusted as needed.
The pieces described in this solution, such as image recognition, deep learning neural networks, and training and inferencing models for imaging, already exist in the ecosystem. The novelty of this idea is an AI solution that integrates with current CT scanners at airports and automatically detects unsafe objects in scanned images of carry-on bags, while continuously learning to improve its accuracy. The author encourages readers to further explore the applicability and benefits of this solution.
Beenish Zia, Technical Marketing Engineer at Intel Corporation
Beenish Zia is an educationist at heart and currently works as a Technical Marketing Engineer in the Data Center Group at Intel Corporation. She serves as an expert and drives technology enablement for the Intel® Xeon® processor family, Artificial Intelligence (AI), and High Performance Computing (HPC). Beenish has a deep technical background in digital circuit design, Computer Aided Design (CAD), hardware prototyping, and system integration, especially for HPC and AI.
Outside of work, Beenish is an enthusiastic supporter of increasing diversity in STEM fields. She loves to write poems and has published a book of poetry. Beenish is also a student of Shotokan karate, with a first-degree black belt.