By “prediction” we mean foretelling the occurrence of an event in advance, on the basis of hypotheses grounded in mathematical calculation.
The word “prediction” holds all the essence and the desire that drive monitoring. Today we cannot be content with merely knowing what is happening to a thing in real time: we need to predict the future on the basis of past events.
Monitoring means the continuous observation of a thing, of a physical quantity, or of the mathematics that describes a thing. The thing constitutes the asset (tangible or intangible) that must be monitored constantly over time.
Time is the fundamental quantity that governs monitoring. In our reality, the flow of time is continuous and infinitely divisible: if we take any time interval, we can segment it into infinitely many instants. In monitoring, instead, we cannot acquire a physical quantity “continuously”. In mathematical terms, there is no such thing as an infinite acquisition frequency. That is why we have to settle for discrete acquisitions, sampled at a certain rate.
The acquisition frequency divides monitoring into two macro areas: dynamic (high-frequency) and static (low-frequency) monitoring.
“High” and “low” are quantified according to the application. In infrastructure monitoring, for instance, the boundary between dynamic and static can be drawn around 10–20 Hz.
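To make the limit concrete: by the Nyquist criterion, an acquisition rate fs can only resolve signal components up to fs / 2. A minimal Python sketch, with a hypothetical 15 Hz boundary chosen inside the 10–20 Hz range mentioned above:

```python
# Nyquist criterion: a sampling rate fs resolves components up to fs / 2.
def max_resolvable_hz(fs_hz: float) -> float:
    """Highest signal frequency a given acquisition rate can capture."""
    return fs_hz / 2.0

def regime(fs_hz: float, boundary_hz: float = 15.0) -> str:
    """Classify a monitoring setup as dynamic or static.

    The 15 Hz default is a hypothetical boundary inside the 10-20 Hz
    range typical of infrastructure monitoring.
    """
    return "dynamic" if fs_hz >= boundary_hz else "static"

print(max_resolvable_hz(100.0))  # a 100 Hz acquisition resolves up to 50.0 Hz
print(regime(100.0))             # dynamic
print(regime(1.0))               # static (e.g. one sample per second)
```

In practice this means a 1 Hz static system is blind to vibrations above 0.5 Hz, which is why dynamic phenomena demand the higher rates.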
Every monitored quantity evolves over time, and its variations may depend on other variables which, in turn, may or may not be monitored themselves. For this reason, monitoring is a powerful tool for identifying correlations among the physical quantities in a monitoring system.
In any case, whatever the use, whoever works with monitoring tools must be able to handle the set of time series that characterizes each acquired physical quantity.
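To fix ideas, a monitored quantity can be held as a time series of (timestamp, value) pairs on which we compute simple descriptors. A stdlib-only Python sketch with made-up strain-gauge readings:

```python
from statistics import mean

# A time series: (seconds since start, measured value), e.g. a strain gauge.
series = [(0, 10.0), (60, 10.2), (120, 10.1), (180, 10.4), (240, 10.3)]

def window_mean(ts, t_start, t_end):
    """Mean of the values acquired in the half-open interval [t_start, t_end)."""
    vals = [v for t, v in ts if t_start <= t < t_end]
    return mean(vals)

# Aggregate the first two minutes versus the following two.
first = window_mean(series, 0, 120)     # (10.0 + 10.2) / 2 = 10.1
second = window_mean(series, 120, 240)  # (10.1 + 10.4) / 2 = 10.25
print(first, second)
```

Real systems would use a time-series store or a library such as pandas, but the core idea is the same: every quantity is a sequence indexed by time, queried over windows.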
In monitoring, prediction is the meeting point between civil-environmental engineering and data science.
All the civil works built in the modern and contemporary age are designed according to the laws of structural mechanics and statics, which dictate the sizing of beams and load-bearing structural elements and account for how the work interacts with itself and with the surrounding environment.
Civil-environmental engineering thus provides the main engineering properties describing the nominal conditions of a project, that is, their expected numerical values. In other words, the nominal values of the engineering properties are the values at the beginning of the work's life, which then change during its normal service.
Observing how these engineering values vary over time is the goal of structural monitoring.
In fact, we already make probabilistic predictions in the design phase: many of the parameters and coefficients that come into play at this stage are calculated on a statistical basis, from historical data on other structures or from laboratory tests.
Prediction also means that the future behavior of a work is calculated from that same work's past, characteristic data. Indeed, however similar the behavior of two structures designed in the same fashion may be, it can never be identical. The power of prediction in monitoring lies precisely in knowing the history of the piece of infrastructure: how it “breathes” through the succession of day and night, and how it reacts to external stimuli. Once we know this history, we can determine the trends of the properties we are examining and set thresholds for predictive maintenance, beyond which we must intervene.
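One simple way to turn a work's history into a trend, sketched here with ordinary least squares on hypothetical deflection readings (an illustration, not the author's actual method):

```python
# Fit value = a + b * t by ordinary least squares, then extrapolate.
def fit_line(ts, vs):
    n = len(ts)
    mt = sum(ts) / n
    mv = sum(vs) / n
    b = sum((t - mt) * (v - mv) for t, v in zip(ts, vs)) \
        / sum((t - mt) ** 2 for t in ts)
    a = mv - b * mt
    return a, b

# Hypothetical yearly deflection measurements (mm) of a beam.
years = [0, 1, 2, 3, 4]
deflection = [5.0, 5.4, 5.8, 6.2, 6.6]  # drifting 0.4 mm per year

a, b = fit_line(years, deflection)
print(round(b, 2))           # estimated trend: 0.4 mm/year
print(round(a + b * 10, 1))  # projected deflection at year 10: 9.0 mm
```

Comparing such a projection against a maintenance threshold is what lets us plan an intervention before the limit is actually reached.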
Setting thresholds requires a thorough and reliable knowledge of the infrastructure's history. Thresholds can be static, adaptive, or dynamic; they are calculated from historical data and selected according to the type of monitoring to be carried out. In any case, predictive maintenance takes shape from the continuous comparison of these thresholds with the engineering properties under consideration.
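The difference between a static and an adaptive threshold can be sketched as follows; the limit and the readings are illustrative, and a real scheme would be tuned to the monitoring type:

```python
from statistics import mean, stdev

readings = [10.0, 10.1, 9.9, 10.2, 10.0, 10.1, 12.5]  # last value is anomalous

STATIC_LIMIT = 12.0  # fixed limit, e.g. taken from design specifications

def static_alarm(value):
    """Static threshold: a fixed limit, independent of history."""
    return value > STATIC_LIMIT

def adaptive_alarm(history, value, k=3.0):
    """Adaptive threshold: alarm when the value leaves the band
    mean +/- k * std learned from recent history."""
    m, s = mean(history), stdev(history)
    return abs(value - m) > k * s

history, current = readings[:-1], readings[-1]
print(static_alarm(current))             # True: above the fixed limit
print(adaptive_alarm(history, current))  # True: far outside the learned band
```

The adaptive variant is what makes the work's history essential: the alarm band is computed from past data rather than fixed once and for all.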
The meeting point between civil-environmental engineering and data science also calls for Machine Learning. It is a helpful tool for monitoring and prediction, since classification and clustering algorithms let us identify the states of the system, for instance by working on the correlations between measured quantities.
A typical example is the application of classification algorithms to the correlation between two or more characteristics of the system, from which sub-domains of existence emerge. Constantly monitoring these correlations provides further information on the past and current behavior of the structure, from which we can determine future trends and potential anomalies.
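Monitoring a correlation can be sketched with a Pearson coefficient on hypothetical paired readings; the temperature/joint-opening pairing below is an invented but typical example for a bridge deck:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily means: ambient temperature (C) and joint opening (mm).
temperature = [5, 10, 15, 20, 25]
opening = [2.1, 1.8, 1.5, 1.2, 0.9]  # the joint closes as the deck expands

r = pearson(temperature, opening)
print(round(r, 2))  # -1.0: strongly anti-correlated
```

A correlation that is stable over time defines the system's normal sub-domain; a sudden drop in |r| between quantities that usually move together is exactly the kind of anomaly such monitoring is meant to flag.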
In the future, the need for monitoring systems will only grow. The fields of application are extremely wide and involve several technologies and research areas, from sensors (electrical, fiber-optic) to civil engineering, passing through data science, computer science, and statistics. This dictates a multidisciplinary approach involving different profiles that must communicate effectively with one another.
Today we still lack a widespread culture of multidisciplinary monitoring: the subject is still too underestimated to justify investing in the necessary resources. Under these conditions, monitoring is left to insufficiently automated tools, causing a lack of data and, consequently, scarce knowledge of the life of a work during its normal operation. This does not help the field of monitoring to thrive.
Greater interest only comes from catastrophic events, such as the collapse of bridges and flyovers, which have pushed investment in new technologies to monitor infrastructure 24 hours a day and in real time. Moreover, most roads and railways were built in the 1950s and 1960s: these works are nearing the end of their useful life and are already showing signs of failure. This too is encouraging the installation of monitoring systems, even though the subject still does not receive the consideration it deserves.
The multidisciplinary nature of the subject requires us to keep up with the times: innovative technologies can become obsolete in a few years. Machine Learning, for instance, has become widely known in just a few years and might itself become obsolete before long. That is why we need to start thinking about applications of Deep Learning and unsupervised algorithms.
Tell me your opinion on the topic, or what I should write next! If you missed it, also read my first post in the Tech Coffee Break column, dedicated to edge technologies!
THE AUTHOR: ANDREA CANFORA
Senior Data Scientist at Sensoworks. His cross-field engineering skills make him the link between the teams that design Sensoworks’ solutions and the technical team that implements them.