Improving asset health and grid resilience using machine learning

This post is co-written with Travis Bronson and Brian L Wilkerson from Duke Energy.

Machine learning (ML) is transforming every industry, process, and business, but the path to success is not always straightforward. In this blog post, we demonstrate how Duke Energy, a Fortune 150 company headquartered in Charlotte, NC, collaborated with the AWS Machine Learning Solutions Lab (MLSL) to use computer vision to automate the inspection of wooden utility poles and help prevent power outages, property damage, and even injuries.

The electric grid is made up of poles, lines, and power plants that generate and deliver electricity to millions of homes and businesses. These utility poles are critical infrastructure components subject to various environmental factors such as wind, rain, and snow, which can cause wear and tear on assets. It is critical that utility poles are regularly inspected and maintained to prevent failures that can lead to power outages, property damage, and even injuries. Most power utility companies, including Duke Energy, use manual visual inspection of utility poles to identify anomalies related to their transmission and distribution network. But this method can be costly and time-consuming, and it requires that power transmission lineworkers follow rigorous safety protocols.

Duke Energy has used artificial intelligence in the past to create efficiencies in day-to-day operations to great success. The company has used AI to inspect generation assets and critical infrastructure and has been exploring opportunities to apply AI to the inspection of utility poles as well. Over the course of the AWS Machine Learning Solutions Lab engagement with Duke Energy, the utility progressed its work to automate the detection of anomalies in wood poles using advanced computer vision techniques.

Goals and use case

The goal of this engagement between Duke Energy and the Machine Learning Solutions Lab is to leverage machine learning to inspect hundreds of thousands of high-resolution aerial images and automate the identification and review process of all wood pole-related issues across 33,000 miles of transmission lines. This goal will further help Duke Energy improve grid resiliency and comply with government regulations by identifying defects in a timely manner. It will also reduce fuel and labor costs, as well as reduce carbon emissions by minimizing unnecessary truck rolls. Finally, it will improve safety by minimizing miles driven, poles climbed, and the physical inspection risks associated with difficult terrain and weather conditions.

In the following sections, we present the key challenges associated with developing robust and efficient models for anomaly detection on wood utility poles. We also describe the key challenges and assumptions associated with the various data preprocessing techniques employed to achieve the desired model performance. Next, we present the key metrics used for evaluating model performance, along with the evaluation of our final models. Finally, we compare various state-of-the-art supervised and unsupervised modeling techniques.

Challenges

One of the key challenges associated with training a model for detecting anomalies in aerial images is non-uniform image sizes. The following figure shows the distribution of image height and width for a sample data set from Duke Energy. It can be observed that the images vary substantially in size. The scale of the images also poses significant challenges: input images are thousands of pixels wide and thousands of pixels long, which is not ideal for training a model to identify the small anomalous regions within an image.

Distribution of image height and width for a sample data set

Also, the input images contain a large amount of irrelevant background information such as vegetation, cars, livestock, and so on. This background information can result in suboptimal model performance. Based on our analysis, only 5% of an image contains the wood pole, and the anomalies are even smaller. This is a major challenge for identifying and localizing anomalies in high-resolution images. The number of anomalies is also significantly smaller compared to the entire data set: only 0.12% of images in the entire data set are anomalous (that is, 1.2 anomalies per 1,000 images). Finally, there is no labeled data available for training a supervised machine learning model. Next, we describe how we address these challenges and explain our proposed methodology.

Solution overview

Modeling techniques

The following figure shows our image processing and anomaly detection pipeline. We first imported the data into Amazon Simple Storage Service (Amazon S3) using Amazon SageMaker Studio. We then employed various data processing techniques to address some of the challenges highlighted above and improve model performance. After data preprocessing, we used Amazon Rekognition Custom Labels for data labeling. The labeled data is then used to train supervised ML models such as Vision Transformer, Amazon Lookout for Vision, and AutoGluon for anomaly detection.

Image processing and anomaly detection pipeline

The following figure shows a detailed overview of our proposed approach, including the data processing pipeline and the various ML algorithms employed for anomaly detection. First, we describe the steps involved in the data processing pipeline. Next, we explain the details and intuition behind the various modeling techniques employed during this engagement to achieve the desired performance goals.

Data preprocessing

The proposed data preprocessing pipeline includes data standardization, identification of the region of interest (ROI), data augmentation, data segmentation, and finally data labeling. The purpose of each step is described below:

Data standardization

The first step in our data processing pipeline is data standardization. In this step, each image is cropped and divided into non-overlapping patches of size 224 x 224 pixels. The goal of this step is to generate patches of uniform size that can then be used for training an ML model and localizing anomalies in the high-resolution images.
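The following minimal sketch illustrates this tiling step. It is not the exact code used in the engagement; the file paths and naming scheme are placeholders.

```python
# Minimal sketch: tile a high-resolution image into non-overlapping 224 x 224 patches.
from pathlib import Path
from PIL import Image

PATCH = 224  # patch size in pixels

def tile_image(image_path: str, out_dir: str) -> int:
    """Crop an image into non-overlapping PATCH x PATCH tiles; returns the tile count."""
    img = Image.open(image_path)
    width, height = img.size
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for top in range(0, height - PATCH + 1, PATCH):
        for left in range(0, width - PATCH + 1, PATCH):
            patch = img.crop((left, top, left + PATCH, top + PATCH))
            patch.save(out / f"{Path(image_path).stem}_{top}_{left}.png")
            count += 1
    return count

# Example (hypothetical file name): tile_image("pole_aerial_0001.jpg", "patches/")
```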

Identification of the region of interest (ROI)

The input data consists of high-resolution images containing a large amount of irrelevant background information (vegetation, houses, cars, horses, cows, and so on). Our goal is to identify anomalies related to wood poles. In order to identify the ROI (that is, patches containing the wood pole), we employed Amazon Rekognition Custom Labels. We trained an Amazon Rekognition Custom Labels model using 3,000 labeled images containing both ROI and background images. The goal of the model is to perform a binary classification between ROI and background patches. Patches identified as background are discarded, while crops predicted as ROI are used in the next step. The following figure shows the pipeline that identifies the ROI. We generated a sample of non-overlapping crops from 1,110 wood pole images, which produced 244,673 crops. We then used these crops as input to the Amazon Rekognition Custom Labels model, which identified 11,356 crops as ROI. Finally, we manually verified each of these 11,356 patches. During the manual inspection, we found that the model correctly predicted 10,969 of the 11,356 patches as wood ROI. In other words, the model achieved 96% precision.
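The ROI filtering step amounts to running each patch through the trained classifier and keeping only the ROI predictions. The sketch below shows one way to do this with the Amazon Rekognition Custom Labels inference API; the project version ARN and label name are placeholders, not values from the engagement.

```python
# Hedged sketch: classify each patch as ROI (wood pole) vs. background with an
# Amazon Rekognition Custom Labels model. ARN and label name are assumptions.
import boto3

rekognition = boto3.client("rekognition")
MODEL_ARN = "arn:aws:rekognition:...:project/pole-roi/version/..."  # placeholder ARN

def is_roi(patch_path: str, min_confidence: float = 80.0) -> bool:
    """Return True if the Custom Labels model predicts the patch contains a wood pole."""
    with open(patch_path, "rb") as f:
        response = rekognition.detect_custom_labels(
            ProjectVersionArn=MODEL_ARN,
            Image={"Bytes": f.read()},
            MinConfidence=min_confidence,
        )
    labels = {label["Name"] for label in response["CustomLabels"]}
    return "wood_pole" in labels  # assumed label name

# Patches flagged as background are discarded; ROI patches move to the next stage.
```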

Identification of region of interest

Data labeling

During the manual inspection of the images, we also labeled each image with its associated label. The associated labels include wood patch, non-wood patch, non-structure, and finally wood patches with anomalies. The following figure shows the labeling of the images using Amazon Rekognition Custom Labels.

Data augmentation

Given the limited amount of labeled data available for training, we augmented the training data set by adding horizontal flips of all of the patches. This had the effect of doubling the size of our data set.
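A minimal sketch of this augmentation, assuming the patches live in a flat directory of PNG files:

```python
# Minimal sketch: double the training set by adding a horizontally flipped copy
# of every labeled patch. Directory layout and file naming are assumptions.
from pathlib import Path
from PIL import Image, ImageOps

def augment_with_flips(patch_dir: str) -> None:
    for path in Path(patch_dir).glob("*.png"):
        flipped = ImageOps.mirror(Image.open(path))  # horizontal (left-right) flip
        flipped.save(path.with_name(path.stem + "_hflip.png"))
```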

Segmentation

We labeled the objects in 600 images (poles, wires, and metal railing) using the bounding box object detection labeling tool in Amazon Rekognition Custom Labels and trained a model to detect the three main objects of interest. We used the trained model to remove the background from all of the images by identifying and extracting the poles in each image while removing all other objects as well as the background. The resulting dataset had fewer images than the original data set because all images that don't contain wood poles were removed. In addition, one false positive image was removed from the dataset.
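The sketch below shows one way this background removal could work with the bounding box detector: crop each image down to the detected pole region and drop images with no detection. The project ARN and label name are placeholders, not values from the engagement.

```python
# Hedged sketch: keep only the pole region of each image using a Custom Labels
# object detection model; ARN and label name are assumptions.
import boto3
from PIL import Image

rekognition = boto3.client("rekognition")
DETECTOR_ARN = "arn:aws:rekognition:...:project/pole-detector/version/..."  # placeholder ARN

def extract_pole(image_path: str, out_path: str, min_confidence: float = 70.0) -> bool:
    """Crop the image to the detected pole bounding box; return False if no pole is found."""
    with open(image_path, "rb") as f:
        response = rekognition.detect_custom_labels(
            ProjectVersionArn=DETECTOR_ARN,
            Image={"Bytes": f.read()},
            MinConfidence=min_confidence,
        )
    poles = [l for l in response["CustomLabels"]
             if l["Name"] == "pole" and "Geometry" in l]  # assumed label name
    if not poles:
        return False  # image dropped: no wood pole detected
    box = poles[0]["Geometry"]["BoundingBox"]  # normalized coordinates
    img = Image.open(image_path)
    w, h = img.size
    left, top = box["Left"] * w, box["Top"] * h
    img.crop((left, top, left + box["Width"] * w, top + box["Height"] * h)).save(out_path)
    return True
```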

Anomaly detection

Next, we use the preprocessed data to train machine learning models for anomaly detection. We employed three different methods for anomaly detection: AWS managed machine learning services (Amazon Lookout for Vision [L4V] and Amazon Rekognition), AutoGluon, and a Vision Transformer-based self-distillation method.

AWS services

Amazon Lookout for Vision (L4V)

Amazon Lookout for Vision is a managed AWS service that enables swift training and deployment of ML models and provides anomaly detection capabilities. It requires fully labeled data, which we provided by pointing to the image paths in Amazon S3. Training the model is as simple as a single API (application programming interface) call or console button click, and L4V takes care of model selection and hyperparameter tuning under the hood.
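A hedged sketch of that single-call training flow using boto3 is shown below. The project name, bucket, and model version are placeholders, and the project is assumed to already point at the labeled images in Amazon S3.

```python
# Hedged sketch: train and query a Lookout for Vision model; names are placeholders.
import boto3

l4v = boto3.client("lookoutvision")

# Kick off training; Lookout for Vision handles model selection and tuning internally.
l4v.create_model(
    ProjectName="wood-pole-anomalies",  # placeholder project
    OutputConfig={"S3Location": {"Bucket": "my-results-bucket", "Prefix": "l4v/"}},
)

# After training completes and the model is started, score a patch for anomalies.
with open("patch_0001.png", "rb") as f:  # hypothetical patch file
    result = l4v.detect_anomalies(
        ProjectName="wood-pole-anomalies",
        ModelVersion="1",
        Body=f.read(),
        ContentType="image/png",
    )
print(result["DetectAnomalyResult"]["IsAnomalous"],
      result["DetectAnomalyResult"]["Confidence"])
```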

Amazon Rekognition

Amazon Rekognition is a managed AI/ML service similar to L4V, which hides modeling details and provides capabilities such as image classification, object detection, custom labeling, and more. It provides the ability to use built-in models for previously known entities in images (for example, from ImageNet or other large open datasets). However, we used Amazon Rekognition's Custom Labels functionality to train the ROI detector, as well as an anomaly detector, on the specific images that Duke Energy has. We also used Amazon Rekognition Custom Labels to train a model to place bounding boxes around wood poles in each image.

AutoGluon

AutoGluon is an open-source machine learning toolkit developed by Amazon. AutoGluon includes a multimodal component that allows easy training on image data. We used AutoGluon MultiModal to train models on the labeled image patches to establish a baseline for identifying anomalies.
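The baseline training loop is short with the AutoGluon MultiModal API; the sketch below assumes a table of labeled patches with an image-path column and a label column (file name and column names are hypothetical).

```python
# Minimal sketch: train an image classifier baseline with AutoGluon MultiModal.
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Hypothetical CSV with columns: 'image' (path to patch) and 'label' (anomaly / normal).
train_df = pd.read_csv("patch_labels.csv")

predictor = MultiModalPredictor(label="label").fit(train_df)
predictions = predictor.predict(train_df)  # in practice, a held-out evaluation set
```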

Vision Transformer

Many of the most exciting recent AI breakthroughs have come from two innovations: self-supervised learning, which allows machines to learn from random, unlabeled examples; and Transformers, which enable AI models to selectively focus on certain parts of their input and thus reason more effectively. Both methods have been a sustained focus for the machine learning community, and we're pleased to share that we used them in this engagement.

Specifically, working in collaboration with researchers at Duke Energy, we used pre-trained self-distillation ViT (Vision Transformer) models as feature extractors for the downstream anomaly detection application using Amazon SageMaker. The pre-trained self-distillation Vision Transformer models are trained on a large amount of training data stored on Amazon S3 in a self-supervised manner using Amazon SageMaker. We leverage the transfer learning capabilities of ViT models pre-trained on large-scale datasets (for example, ImageNet). This helped us achieve a recall of 83% on an evaluation set using only a few thousand labeled images for training.
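As one concrete way to set this up (an assumption, not necessarily the exact model used in the engagement), a DINO-style self-distilled ViT can serve as a frozen feature extractor, with a lightweight classifier fit on the labeled patches:

```python
# Hedged sketch: frozen self-distilled ViT features (DINO ViT-S/16 via torch.hub,
# chosen here as an example) plus a simple classifier on the labeled patches.
import torch
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

vit = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
vit.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Return the ViT embedding for a single image patch."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return vit(x).squeeze(0)

# Hypothetical labeled patches: 1 = anomalous, 0 = normal.
paths, labels = ["patches/a.png", "patches/b.png"], [1, 0]
features = torch.stack([embed(p) for p in paths]).numpy()
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```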

Evaluation metrics

The following figure shows the key metrics used to evaluate model performance and its impacts. The key goal of the model is to maximize anomaly detection (that is, true positives) and minimize the number of false negatives, or cases where anomalies that could lead to outages are misclassified.

Once the anomalies are identified, technicians can address them, preventing future outages and ensuring compliance with government regulations. There is another benefit to minimizing false positives: it avoids the unnecessary effort of re-reviewing images.

Keeping these metrics in mind, we track model performance in terms of the following metrics, which encapsulate all four metrics defined above.

Precision

The percentage of detected anomalies that are actual anomalies for objects of interest. Precision measures how well our algorithm identifies only anomalies. For this use case, high precision means few false alarms (for example, the algorithm falsely identifying a woodpecker hole when there isn't one in the image).

Recall

The percentage of all anomalies that are recovered for each object of interest. Recall measures how well we identify all anomalies. For this use case, high recall means that we're good at catching woodpecker holes when they occur. Recall is therefore the right metric to focus on in this POC, because false alarms are at best annoying while missed anomalies could lead to serious consequences if left unattended.

Lower recall can lead to outages and government regulation violations, while lower precision leads to wasted human effort. The primary goal of this engagement is to identify all of the anomalies in order to comply with government regulations and avoid outages, so we prioritize improving recall over precision.
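To make the trade-off concrete, both metrics reduce to simple ratios over the confusion-matrix counts:

```python
# Minimal sketch of the two metrics in terms of true/false positives and false negatives.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)  # of everything flagged, how much is a real anomaly

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)  # of all real anomalies, how many were caught
```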

Evaluation and model comparison

In the following section, we compare the various modeling techniques employed during this engagement. We evaluated the performance of two AWS services, Amazon Rekognition and Amazon Lookout for Vision. We also evaluated various modeling techniques using AutoGluon. Finally, we compare the performance with the state-of-the-art ViT-based self-distillation method.

The following figure shows the model improvement for AutoGluon using different data processing techniques over the period of this engagement. The key observation is that as we improved data quality and quantity, the performance of the model in terms of recall improved from below 30% to 78%.

Next, we compare the performance of AutoGluon with AWS services. We also employed various data processing techniques that helped improve performance. However, the major improvement came from increasing the data quantity and quality. We increased the dataset size from 11K images in total to 60K images.

Next, we compare the performance of AutoGluon and AWS services with the ViT-based method. The following figure demonstrates that the ViT-based method, AutoGluon, and AWS services performed on par in terms of recall. One key observation is that, beyond a certain point, increases in data quality and quantity do not improve performance in terms of recall. However, we do observe improvements in terms of precision.

Precision versus recall comparability

Next, we present the confusion matrices for AutoGluon, Amazon Rekognition, and the ViT-based method using our dataset, which contains 62K samples. Out of the 62K samples, 20K are anomalous while the remaining 42K images are normal. It can be observed that the ViT-based method captures the largest number of anomalies (16,600), followed by Amazon Rekognition (16,000) and AutoGluon (15,600). Similarly, AutoGluon has the fewest false positives (3,659 images), followed by Amazon Rekognition (5,918) and ViT (15,323). These results demonstrate that Amazon Rekognition achieves the highest AUC (area under the curve).
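As a cross-check, the recall and precision figures quoted in the conclusion below follow directly from these confusion-matrix counts (20,000 anomalous samples in total):

```python
# Reproduce recall and precision from the reported confusion-matrix counts.
counts = {                      # (true positives, false positives)
    "ViT":         (16_600, 15_323),
    "Rekognition": (16_000, 5_918),
    "AutoGluon":   (15_600, 3_659),
}
for name, (tp, fp) in counts.items():
    print(f"{name}: recall={tp / 20_000:.0%}, precision={tp / (tp + fp):.0%}")
# ViT: recall=83%, precision=52%; Rekognition: 80%, 73%; AutoGluon: 78%, 81%
```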

Conclusion

In this post, we showed you how the MLSL and Duke Energy teams worked together to develop a computer vision-based solution to automate anomaly detection in wood poles using high-resolution images collected via helicopter flights. The proposed solution employed a data processing pipeline to crop the high-resolution images for size standardization. The cropped images are then processed using Amazon Rekognition Custom Labels to identify the region of interest (that is, crops containing the patches with poles). Amazon Rekognition achieved 96% precision in correctly identifying the patches with poles. The ROI crops are then used for anomaly detection using a ViT-based self-distillation model, AutoGluon, and AWS services. We used a common data set to evaluate the performance of all three methods. The ViT-based model achieved 83% recall and 52% precision. AutoGluon achieved 78% recall and 81% precision. Finally, Amazon Rekognition achieved 80% recall and 73% precision. The goal of using three different methods is to compare the performance of each method with different numbers of training samples, training time, and deployment time. All of these methods take less than 2 hours to train and deploy using a single A100 GPU instance or managed services on AWS. Next steps for further improvement in model performance include adding more training data to improve model precision.

Overall, the end-to-end pipeline proposed in this post helps achieve significant improvements in anomaly detection while minimizing operations cost, safety incidents, regulatory risks, carbon emissions, and potential power outages.

The solution developed can be employed for other anomaly detection and asset health-related use cases across transmission and distribution networks, including defects in insulators and other equipment. For further assistance in developing and customizing this solution, please feel free to get in touch with the MLSL team.

About the Authors

Travis Bronson is a Lead Artificial Intelligence Specialist with 15 years of experience in technology and 8 years specifically dedicated to artificial intelligence. Over his 5-year tenure at Duke Energy, Travis has advanced the application of AI for digital transformation by bringing unique insights and creative thought leadership to his company's forefront. Travis currently leads the AI Core Team, a group of AI practitioners, enthusiasts, and business partners focused on advancing AI outcomes and governance. Travis gained and refined his skills in multiple technological fields, starting in the US Navy and US Government, then transitioning to the private sector after more than a decade of service.

Brian Wilkerson is an accomplished professional with 20 years of experience at Duke Energy. With a degree in computer science, he has spent the past 7 years excelling in the field of artificial intelligence. Brian is a co-founder of Duke Energy's MADlab (Machine learning, AI and Deep learning team). He currently holds the position of Director of Artificial Intelligence & Transformation at Duke Energy, where he is passionate about delivering business value through the implementation of AI.

Ahsan Ali is an Applied Scientist at the Amazon Generative AI Innovation Center, where he works with customers from different domains to solve their urgent and expensive problems using generative AI.

Tahin Syed is an Applied Scientist with the Amazon Generative AI Innovation Center, where he works with customers to help realize business outcomes with generative AI solutions. Outside of work, he enjoys trying new food, traveling, and teaching taekwondo.

Dr. Nkechinyere N. Agu is an Applied Scientist in the Generative AI Innovation Center at AWS. Her expertise is in computer vision AI/ML methods, applications of AI/ML to healthcare, and the integration of semantic technologies (Knowledge Graphs) in ML solutions. She has a Masters and a Doctorate in Computer Science.

Aldo Arizmendi is a Generative AI Strategist in the AWS Generative AI Innovation Center based out of Austin, Texas. Having received his B.S. in Computer Engineering from the University of Nebraska-Lincoln, over the last 12 years Mr. Arizmendi has helped hundreds of Fortune 500 companies and start-ups transform their business using advanced analytics, machine learning, and generative AI.

Stacey Jenks is a Principal Analytics Sales Specialist at AWS, with more than 20 years of experience in analytics and AI/ML. Stacey is passionate about diving deep on customer initiatives and driving transformational, measurable business outcomes with data. She is especially enthusiastic about the mark that utilities will make on society, via their path to a greener planet with affordable, reliable, clean energy.

Mehdi Noor is an Applied Science Manager at the Generative AI Innovation Center. With a passion for bridging technology and innovation, he assists AWS customers in unlocking the potential of generative AI, turning potential challenges into opportunities for rapid experimentation and innovation by focusing on scalable, measurable, and impactful uses of advanced AI technologies, and streamlining the path to production.