Driving AI Revolution with Pre-built Analytic Modules
What is the Intelligence Revolution equivalent to the ¼” bolt?
I asked this question in the blog “How History Can Prepare Us for the Upcoming AI Revolution?” when trying to understand what history can teach us about technology-induced revolutions. One of the key capabilities of the Industrial and Information revolutions was the transition from labor-intensive, hand-crafted solutions to mass-manufactured ones. In the Information Revolution, it was the creation of standardized database management systems, middleware and operating systems. In the Industrial Revolution, it was the creation of standardized parts – like the ¼” bolt – that could be used to assemble, versus hand-craft, solutions. So, what is the ¼” bolt equivalent for the AI Revolution? I think the answer is analytic engines, or modules!
Analytic Modules are pre-built engines – think Lego blocks – that can be assembled to create specific business and operational applications. These Analytic Modules would have the following characteristics:
- pre-defined data input definitions and data dictionary (so it knows what type of data it is ingesting, regardless of the origin of the source system).
- pre-defined data integration and transformation algorithms to cleanse, align and normalize the data.
- pre-defined data enrichment algorithms to create higher-order metrics (e.g., reach, frequency, recency, indices, scores) necessitated by the analytic model.
- algorithmic models (built using advanced analytics such as predictive analytics, machine learning or deep learning) that take the transformed and enriched data and generate the desired outputs.
- layer of abstraction (maybe using Predictive Model Markup Language, or PMML) above the Predictive Analytics, Machine Learning and Deep Learning frameworks that allows application developers to pick and use their preferred or company-mandated standards.
- orchestration capability to “call” the most appropriate machine learning or deep learning framework based upon the type of problem being addressed. See Keras, which is a high-level neural networks API, written in Python and capable of running on top of popular machine learning frameworks such as TensorFlow, CNTK, or Theano.
- pre-defined outputs (APIs) that feed the analytic results to downstream operational systems (e.g., operational dashboards, manufacturing, procurement, marketing, sales, support, services, finance).
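A minimal sketch of what an Analytic Module’s contract might look like in Python. The class and method names here are hypothetical illustrations of the characteristics above – a pre-defined input schema, built-in transformation and enrichment, an algorithmic model, and pre-defined outputs – not any specific product’s API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class AnalyticModule(ABC):
    """A self-contained analytic 'Lego block': fixed input schema,
    built-in transform/enrich steps, a model, and fixed outputs."""

    # Pre-defined data input definition: field name -> expected type.
    input_schema: Dict[str, type] = {}

    def validate(self, record: Dict[str, Any]) -> None:
        # Reject records that do not match the declared data dictionary.
        for field, ftype in self.input_schema.items():
            if field not in record or not isinstance(record[field], ftype):
                raise ValueError(f"bad or missing field: {field}")

    @abstractmethod
    def transform(self, record):
        """Cleanse, align and normalize the raw record."""

    @abstractmethod
    def enrich(self, record):
        """Create higher-order metrics (scores, indices) for the model."""

    @abstractmethod
    def score(self, record):
        """Run the algorithmic model and produce the analytic result."""

    def run(self, records: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Pre-defined output: results ready for downstream APIs."""
        results = []
        for r in records:
            self.validate(r)
            results.append(self.score(self.enrich(self.transform(r))))
        return results


class VibrationAnomalyModule(AnalyticModule):
    """Toy concrete module: flags machines with excessive vibration."""

    input_schema = {"device_id": str, "vibration_mm_s": float}

    def transform(self, r):
        # Normalize: negative sensor readings are clipped to zero.
        return {**r, "vibration_mm_s": max(0.0, r["vibration_mm_s"])}

    def enrich(self, r):
        # Higher-order metric: severity index relative to a 10 mm/s norm.
        return {**r, "severity_index": r["vibration_mm_s"] / 10.0}

    def score(self, r):
        return {"device_id": r["device_id"],
                "anomaly": r["severity_index"] > 1.0}
```

Because every module honors the same `run()` contract, modules can be swapped or chained without the calling application knowing which underlying framework produced the model.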
Analytic Modules produce pre-defined analytic results or outcomes, while providing a layer of abstraction that enables the orchestration and optimization of the underlying machine learning and deep learning frameworks.
Monetizing IoT with Analytic Modules
The BCG Insights report titled “Winning in IoT: It’s All About the Business Processes” highlighted the top 10 IoT use cases that will drive IoT spending, including predictive maintenance, self-optimized production, automated inventory management, fleet management, and distributed generation and storage (see Figure 1).
Figure 1: Top 10 IoT Use Cases That Will Drive IoT Market Growth
But these IoT applications will be more than just reports and dashboards that monitor what is happening. They’ll be “intelligent” – learning with every interaction to predict what’s likely to happen and prescribe corrective action to prevent costly, undesirable and/or dangerous situations – and the foundation for an organization’s self-monitoring, self-diagnosing, self-correcting and self-learning IoT environment.
While this is a very attractive list of IoT applications to target, treating any of these use cases as a single application is a huge mistake. It’s like the return of the big bang IT projects of the ERP, MRP and CRM days, where tens of millions of dollars were spent in hopes that 2 to 3 years later, something of value would materialize.
Instead, these “intelligent” IoT applications will be composed of analytic modules integrated to address the key business and operational decisions those applications must support. For example, think of predictive maintenance as an assembly of analytic modules addressing the following predictive maintenance decisions:
- predicting at-risk component failures.
- optimizing resource scheduling and staffing.
- matching technicians and inventory to the maintenance and repair work to be done.
- ensuring tool and repair equipment availability.
- optimizing first-time-fix rates.
- optimizing parts and MRO inventory.
- predicting component fixability.
- optimizing the logistics of parts, tools and technicians.
- leveraging cohort analysis to improve service and repair predictability.
- leveraging event association analysis to determine how weather, economic and special events impact device and machine maintenance and repair needs.
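To make the “Lego block” assembly concrete, here is a heavily simplified sketch of chaining two of the decisions above – failure-risk prediction and technician scheduling – into one predictive maintenance flow. Every function, field and threshold here is illustrative; a real module would wrap a trained model rather than a usage-ratio heuristic.

```python
def predict_failure_risk(assets):
    """Hypothetical module 1: score each asset's failure risk (0..1).
    Here risk is simply runtime relative to rated life."""
    return {a["id"]: min(1.0, a["runtime_hours"] / a["rated_hours"])
            for a in assets}


def schedule_technicians(risks, technicians, threshold=0.8):
    """Hypothetical module 2: greedily assign available technicians
    to the highest-risk assets first (resource scheduling decision)."""
    at_risk = sorted(((r, aid) for aid, r in risks.items() if r >= threshold),
                     reverse=True)  # highest risk first
    return {aid: tech for (risk, aid), tech in zip(at_risk, technicians)}


# Assembling the two modules into one maintenance application:
assets = [
    {"id": "pump-1", "runtime_hours": 9500, "rated_hours": 10000},
    {"id": "fan-2", "runtime_hours": 2000, "rated_hours": 10000},
    {"id": "motor-3", "runtime_hours": 8800, "rated_hours": 10000},
]
risks = predict_failure_risk(assets)
plan = schedule_technicians(risks, ["alice", "bob"])
```

The point is the plumbing, not the heuristics: because each module exposes a simple input/output contract, the same failure-risk module could feed a parts-logistics module or an inventory-optimization module just as easily as the scheduler.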
As I covered in the blog “The Future Is Intelligent Apps,” the only way to create intelligent applications is to have a methodical approach that starts the hypothesis development process by identifying, validating, valuing and prioritizing the decisions (or use cases) that comprise these intelligent applications (see Figure 2).
Figure 2: Thinking Like A Data Scientist
As you take your business and operational stakeholders through the “Thinking Like A Data Scientist” process to uncover those decisions, it only makes sense to create Analytic Modules that address the specific advanced analytic and operational data requirements of those decisions. These analytic modules, if constructed using modern DevOps methodologies and capabilities, can then be linked together like Lego pieces to create intelligent IoT applications.
IoT Analytic Modules
One example of an IoT Analytic Module is Anomaly Detection. Anomaly detection is the identification of items, events or observations that do not conform to an expected pattern or to other items in a dataset (see Figure 3).
Figure 3: Anomaly Detection Example
Anomaly detection matters because a substantial change from normal behavior can indicate the presence of intended or unintended attacks, faults, defects and other problems. A number of different machine learning techniques can be used to help flag and assess the severity of detected anomalies, including:
- k-Nearest Neighbor (k-NN): in pattern recognition, the k-nearest neighbors algorithm is a non-parametric method used for classification and regression.
- Neural Networks: a series of algorithms that identify underlying relationships in the data by using layers of interconnected nodes. Neural networks have the ability to adapt to changing input so the model produces the best possible result without the need to redesign the output criteria.
- Decision Trees: a decision support tool that uses a tree-like graph to model decisions and their possible consequences, including chance event outcomes, resource costs, and utility.
- Support Vector Machine: in machine learning, support vector machines are supervised learning models that analyze data used for classification and regression analysis.
- Self-organizing map: a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a two-dimensional map of the training data to aid in dimensionality reduction.
- k-means clustering: k-means clustering partitions n observations into k clusters in which each observation belongs to the cluster with the nearest mean.
- Fuzzy C-means: a form of fuzzy clustering in which each data point can belong to more than one cluster.
- Expectation-Maximization (EM): an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models.
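As a concrete illustration of the distance-based techniques above, here is a minimal k-nearest-neighbor anomaly score in plain Python: a point far from its k nearest neighbors scores high and is a likely anomaly. The data is a toy set of two-dimensional sensor readings invented for the example.

```python
import math


def knn_anomaly_score(point, data, k=3):
    """Anomaly score = mean distance to the k nearest neighbors.
    Points in dense regions score low; isolated points score high."""
    dists = sorted(math.dist(point, other) for other in data if other != point)
    return sum(dists[:k]) / k


# Toy sensor readings: a tight "normal" cluster plus one outlier.
readings = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.2), (9.0, 9.0)]
scores = {p: knn_anomaly_score(p, readings) for p in readings}
outlier = max(scores, key=scores.get)
```

In a production module the score would be compared against a threshold tuned on historical data, and the same scoring routine could be swapped for any of the other techniques listed above without changing the module’s inputs or outputs.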
One technique in particular that is gaining traction for anomaly detection is adaptive resonance theory (ART). ART has been used extensively to detect network intrusions, where anomalies are often malicious intrusion attempts that represent a serious threat to network security.
Real-world Anomaly Detection Case Study
Power generating facilities and industrial plants need to maximize operational efficiency by optimizing operating conditions based on fuel and raw material lot changes, aging deterioration of equipment, and so on. In a joint effort with Universiti Teknologi PETRONAS, Hitachi has developed high-efficiency operational support technologies for industrial plants that leverage advanced anomaly detection to identify equipment and operating anomalies.
Conventional anomaly diagnosis technologies are based on initial conditions, so fluctuations within normal operating ranges are also flagged as “anomalous.” This has made them difficult to apply in places where the conditions considered normal can change on a daily basis.
However, Hitachi’s newly developed technology employs a sequential learning-type data classification technology known as adaptive resonance theory (ART). Since ART can teach a system the “normal” conditions that correspond to a wide range of operating states, anomalies can be detected accurately (see Figure 4).
Figure 4: Anomaly Detection Using Adaptive Resonance Theory (ART)
The system was verified on a pilot plant for distillation towers, a key piece of equipment in crude oil refining plants. Even when the composition of the raw materials changed, anomalies such as malfunctioning flow adjustment valves and sensor drift could still be detected.
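The vigilance idea behind ART can be sketched in heavily simplified form: learn prototype “normal” operating states sequentially from training data, and flag any new sample that resonates with no prototype within a vigilance threshold. This is an illustrative toy on one-dimensional data, not Hitachi’s implementation.

```python
def learn_normal_states(samples, vigilance=1.0):
    """Sequentially cluster normal operating data into prototypes.
    A sample within `vigilance` of an existing prototype refines it
    (resonance); otherwise it seeds a new prototype (reset)."""
    protos = []
    for s in samples:
        match = next((p for p in protos
                      if abs(p["mean"] - s) <= vigilance), None)
        if match:
            match["n"] += 1
            match["mean"] += (s - match["mean"]) / match["n"]  # running mean
        else:
            protos.append({"mean": s, "n": 1})
    return protos


def is_anomaly(sample, protos, vigilance=1.0):
    """Anomalous if the sample resonates with no learned normal state."""
    return all(abs(p["mean"] - sample) > vigilance for p in protos)


# Normal operation around two regimes (e.g., two raw-material lots):
normal = [10.0, 10.2, 9.9, 25.0, 25.3, 24.8]
states = learn_normal_states(normal)
```

The sequential learning is the point: when a raw-material lot change creates a legitimately new operating regime, the module learns it as a new “normal” prototype instead of raising false alarms forever, which is the weakness of the initial-conditions approach described above.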
For more details on this Hitachi case study, please check out the “High-efficiency Operational Support Technologies for Industrial Plants” paper from Hitachi’s Research and Development Group.
Analytic Modules are one way to not only simplify the development of intelligent IoT applications, but also provide a way to monetize one’s analytics capabilities by reusing the same modules across a multitude of IoT use cases or applications. For example, an Anomaly Detection module could be used across a number of different IoT use cases or applications as depicted in Figure 5.
Figure 5: Monetizing the Anomaly Detection Analytic Module across several IoT use cases
Any improvements in the effectiveness of that particular analytic module immediately drives economic value to all the other use cases that analytic module supports. When this happens, not only are organizations deriving and driving the economic value of their IoT data, they are deriving and driving the economic value of their IoT analytics.
As we found in our University of San Francisco research project on the economic value of data, we are only now beginning to understand how to monetize our digital assets through re-use, or as Adam Smith would say, “value in use” versus “value in exchange.”
PMML (Predictive Model Markup Language) is an XML-based language that enables the definition and sharing of predictive models between applications. A predictive model is a statistical model that is designed to predict the likelihood of target occurrences given established variables or factors.