Practical Process Control Part 22: Monitoring

Article by Myke King CEng FIChemE

In the last article we looked at the assessment of a potential inferential. Here we cover techniques for monitoring its performance and automatically updating it to maintain its accuracy.

Quick read

  • Inferential Accuracy and Bias Correction: Inferentials should be validated using quality measurements, but discrepancies arise due to time-stamping inaccuracies and laboratory errors. A cumulative sum of errors (CUSUM) approach helps distinguish bias from random errors, improving inferential accuracy
  • Dynamic Compensation for Analyser Updates: On-stream analysers and inferentials have different response times. Dynamic compensation techniques, such as deadtime and lead-lag algorithms, align their measurements to improve inferential performance and update reliability
  • Handling Complex Process Dynamics: Inferentials must be designed with dynamic behaviour in mind. In cases where inputs respond at different rates to disturbances, reducing the number of variables can simplify dynamics and improve control system performance

WHILE we may have installed an effective inferential, this will not usually replace the existing quality measurement – whether it be laboratory sampling or an on-stream analyser. So, we can use these measurements to check the accuracy of the inferential and potentially correct it automatically.

First, let us consider the use of a laboratory result for the property Q. The issue is that the result is reported some considerable time after the sample is taken. To validate the inferential, we need to know its value at the time of sampling. While sample time is recorded in many LIMS (laboratory information management systems), it is often the scheduled rather than the actual time. Processes are rarely perfectly at steady state, and so the comparison between laboratory and inferential becomes unreliable. This is, of course, also an issue when developing an inferential. The difference, however, is that in using a large number of records, such time-stamping errors average close to zero.

While they will cause a reduction in R², they will have less effect on accuracy. The problem arises if today's result is very different from the installed inferential. The solution, of course, is reliable time-stamping. This is commonplace in highly regulated processes, such as pharmaceuticals. Sampling points include a limit switch which is activated when a sample is taken. The time is logged automatically and a scannable sample label printed – which also helps avoid sample mix-ups.

Traditionally, a bias term in the inferential is updated to force it to agree with the latest laboratory result. However, we recognise that the laboratory is also prone to error and so we take a cautious approach. We introduce the parameter K, typically set to around 0.35, so that the correction ramps in over several samples:
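$$\text{bias}_n = \text{bias}_{n-1} + K\left(Q_\text{laboratory} - Q_\text{inferential}\right)$$

Here Q_inferential is the inferential's prediction at the time the sample was taken; the form shown is reconstructed from the description above.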

Fortunately, much of the industry now recognises this approach is flawed. Any random error in the laboratory result will be added to the bias and so appear in the corrected inferential. The variance of the error will therefore be increased by the factor (1 + K²), so reducing φ – the performance measure we defined when assessing inferentials in the last article.

While clearly unsuitable for dealing with random errors, the technique remains important for handling bias errors. Such errors might occur if there is a change to the process – for example, in feed composition or catalyst activity. So how do we separate bias error from random error?

Table 1 records 20 consecutive laboratory results alongside the value of the inferential at sample time. The final column is the cumulative sum of errors (CUSUM), plotted as Figure 1. If the error were random, the CUSUM trend would be a noisy horizontal line. The slope of the trend is the bias error. In this example, the slope was determined using the last six records and applied as a correction to the inferential's bias term. While waiting for six results might seem an excessive delay in applying an update, the method it replaces (with K set to 0.35) would have implemented only 92% of the correction in the same time.

Table 1: Laboratory versus inferential
Figure 1: Use of CUSUM

And it’s likely that we could make the correction after fewer samples.

Without going into the proof of what might seem an obvious result, consider the slope of the CUSUM over the last three samples.
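Assuming a least-squares line through these three CUSUM values (a reconstructed form; e denotes the laboratory-minus-inferential error), the bias estimate is simply the average of the two most recent errors:

$$\text{bias} = \frac{e_n + e_{n-1}}{2}$$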

Less obviously, we can use the slope over the last four samples.
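Again assuming a least-squares fit (a reconstructed form), the estimate weights the three most recent errors unequally:

$$\text{bias} = \frac{3e_n + 4e_{n-1} + 3e_{n-2}}{10}$$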

Inclusion of the term K may be unnecessary – certainly its value can now approach 1. In fact, its value is better optimised as part of the regression analysis. Note that, in determining future corrections, the bias correction also has to be applied retrospectively to those predictions which will be used to determine the next correction.
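By way of illustration, here is a minimal sketch in Python of the CUSUM calculation and bias update. The data are invented, and the use of a least-squares slope over the last six points is an assumption consistent with the worked example above.

```python
import numpy as np

def cusum_slope(lab, inferential, n_points=6):
    """Estimate the bias error as the least-squares slope of the CUSUM
    of (laboratory - inferential) errors over the last n_points results."""
    errors = np.asarray(lab, dtype=float) - np.asarray(inferential, dtype=float)
    cusum = np.cumsum(errors)
    t = np.arange(n_points)
    return np.polyfit(t, cusum[-n_points:], 1)[0]  # slope of fitted line

# Invented results: the inferential reads consistently low by about 0.3
lab         = [3.1, 2.9, 3.3, 3.6, 3.5, 3.8, 3.7, 3.9]
inferential = [2.9, 2.7, 3.0, 3.2, 3.2, 3.5, 3.4, 3.6]

correction = cusum_slope(lab, inferential)
print(f"bias correction: {correction:.3f}")

# Apply the correction retrospectively to the stored predictions, so that
# the next CUSUM is built from corrected values
inferential = [q + correction for q in inferential]
```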

On-stream analyser

If installed, we can also use an on-stream analyser to update the inferential. While the inferential and the analyser might agree at steady state, they will not do so during a disturbance. This is because the dynamic response of the inferential will be faster than that of the analyser (otherwise the inferential has little purpose!). We could wait until steady state is reached before updating, but a better approach is to apply dynamic compensation. We covered the technique as part of our article on feedforward control (see TCE 999). We step-test to obtain the dynamics of both analyser and inferential. As shown in Figure 2, using a deadtime and lead-lag algorithm delays the inferential measurement so that it has the same dynamics as the analyser. Figure 3 shows the configuration. It includes a filter parameter (P). For continuous analysers this can be set close to 1. Discontinuous analysers produce a staircase trend, so a value of 0.7 or less is advisable to prevent this adversely affecting the inferential. Better still is to use the analyser's read-now contact to trigger each update.

Figure 2: Dynamic compensation
Figure 3: Analyser updating
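As a minimal sketch of Figures 2 and 3 in Python: the tuning values and data are invented, and the filter is assumed to take the form bias = (1 − P)·bias + P·(analyser − delayed inferential), so that P close to 1 applies corrections quickly.

```python
from collections import deque

class DeadtimeLeadLag:
    """Deadtime plus lead-lag compensation (Figure 2). Times are in the
    same units as the execution interval ts."""

    def __init__(self, theta, lead, lag, ts):
        self.n_delay = max(1, round(theta / ts))
        self.lead, self.lag, self.ts = lead, lag, ts
        self.delay = None  # deadtime buffer, filled on the first call

    def update(self, x):
        if self.delay is None:
            # initialise at the first input to avoid a start-up transient
            self.delay = deque([x] * self.n_delay)
            self.y = self.x_prev = x
        self.delay.append(x)
        xd = self.delay.popleft()  # input delayed by the deadtime
        # backward-Euler discretisation of (1 + lead.s)/(1 + lag.s)
        self.y = (self.lag * self.y + self.ts * xd
                  + self.lead * (xd - self.x_prev)) / (self.lag + self.ts)
        self.x_prev = xd
        return self.y

# Slow the (fast) inferential to match the analyser: deadtime is the
# difference in deadtimes, lead the inferential lag, lag the analyser lag
comp = DeadtimeLeadLag(theta=8.0, lead=2.0, lag=6.0, ts=1.0)

P, bias = 0.7, 0.0  # 0.7 or less advisable for a discontinuous analyser
for analyser, inferential in [(5.2, 5.0), (5.3, 5.1), (5.4, 5.1)]:  # invented
    delayed = comp.update(inferential)                 # Figure 2
    bias = (1 - P) * bias + P * (analyser - delayed)   # Figure 3: filtered update
    corrected = inferential + bias
    print(f"corrected inferential: {corrected:.2f}")
```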

While we must take account of analyser dynamics when monitoring and updating an inferential, they also influence the precision with which we can develop one. One approach is to ensure that the data are collected at steady state. This may limit the number of records available and miss those occasions when the process is away from target – so reducing the data scatter on which regression depends. There is, however, a simple way of including dynamics in the regression analysis itself. Considering, first, a single-input inferential, we regress the equation:
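$$y_n = a_0 + a_1 y_{n-1} + a_2 x_{n-k} + a_3 x_{n-k-1}$$

where k is the number of whole data collection intervals of deadtime – a form inferred from the description that follows.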

This equation is effectively the same as the FOPDT (first order plus deadtime) model that we developed in TCE 981. To apply it, imagine we have a spreadsheet with the dependent variable y in the first column and the independent variable x in the second. We first insert a column between the two and copy into it the values of y – displaced downwards one row. The column will now contain yₙ₋₁. We'll show later that this helps us identify the lag (τ). To obtain the deadtime (θ), we similarly copy x into the next column, displaced by one row, to give xₙ₋₁. We repeat this for several columns to include xₙ₋₂, xₙ₋₃ etc – adding enough columns to cover the likely deadtime. We then delete any incomplete rows from the beginning and end of the spreadsheet.
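As a minimal sketch of this construction in Python (pandas' shift performs the row displacement; the data, column names and the choice of xₙ₋₁ and xₙ₋₂ for the regression step described below are illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Invented data, collected at a fixed interval ts
df = pd.DataFrame({
    "y": [5.0, 5.1, 5.4, 5.9, 6.3, 6.5, 6.6, 6.6, 6.6, 6.6],
    "x": [10.0, 10.5, 11.5, 12.0, 12.2, 12.2, 12.2, 12.2, 12.2, 12.2],
})

df["y_prev"] = df["y"].shift(1)         # y(n-1): identifies the lag
for i in (1, 2, 3):                     # x(n-1), x(n-2), x(n-3): enough
    df[f"x_lag{i}"] = df["x"].shift(i)  # columns to cover the likely deadtime
df = df.dropna()                        # delete incomplete rows

# Regress y against y(n-1) and a pair of consecutive x values - shown for
# x(n-1) and x(n-2); in practice try each pair and keep the best fit
X = np.column_stack([np.ones(len(df)), df["y_prev"], df["x_lag1"], df["x_lag2"]])
a, *_ = np.linalg.lstsq(X, df["y"].to_numpy(), rcond=None)
print(dict(zip(["a0", "a1", "a2", "a3"], np.round(a, 3))))
```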

We use regression to identify the best three-input inferential. If there is a clear dynamic model, the best inputs will be those in the equation above. In addition to yₙ₋₁, it should include values of x that are one data collection interval (tₛ) apart. These allow for θ not being an exact multiple of tₛ (if a₃ is close to 0, then it is). The values of θ and τ are not required for the inferential itself but will be of use when we later install analyser updating. They are calculated from:
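$$\tau = \frac{-t_s}{\ln a_1} \qquad \theta \approx \left(k + \frac{a_3}{a_2 + a_3}\right) t_s$$

The expression for τ follows from a₁ = e^(−tₛ/τ); the interpolation term in θ, which apportions the deadtime between the two consecutive values of x, is an assumed form.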

While we include dynamic compensation when developing the inferential, we do not include it in the installed version. We want it to give the earliest possible indication of any change, not to have the same dynamics as the analyser. The implemented inferential would therefore be:
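$$Q = \frac{a_0 + (a_2 + a_3)\,x_n}{1 - a_1}$$

This is the steady-state form of the regressed equation, obtained by setting yₙ = yₙ₋₁ = Q and using the current value of x.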

This technique can equally be applied when developing an inferential that includes more than one independent variable – but only if the dynamics are similar between the analyser and each input. If they are not, then consideration should be given to reducing the number of inputs.

Difficult dynamics

Figure 4 shows a near-perfect inferential, in that it accurately predicts the property at steady state. But its dynamic behaviour would cause considerable controller tuning problems. The inverse response is the result of x₃ changing some time after the other inputs. For example, a disturbance at the top of a distillation column will affect tray temperatures nearer the top more quickly than those nearer the bottom. In theory it would be possible to lag the other inputs to match the dynamics of x₃, but this is unlikely to be practical. Figure 5 shows the same inferential responding to a different disturbance. This might be at the bottom of our column and so affect x₃ first. The inferential now shows very different dynamics. Since we don't know the source of the disturbance, or whether there are several occurring simultaneously, dynamic compensation is not practical. In this case, we would sacrifice some accuracy by omitting x₃ from the inferential. As the figures show, the result is much simpler dynamic behaviour.

(Top to bottom) Figure 4: Inferential input dynamics (case 1); Figure 5: Inferential input dynamics (case 2)

Next issue

In the next issue we’ll present a number of examples of inferential development, aimed at illustrating some of the key issues.

The topics featured in this series are covered in greater detail in Myke King's book, Process Control – A Practical Approach, published by Wiley in 2016.


This is the twenty-second in a series that provides practical process control advice on how to bolster your processes. To read more, visit the series hub at https://www.thechemicalengineer.com/tags/practical-process-control/


Disclaimer: This article is provided for guidance alone. Expert engineering advice should be sought before application.

Article by Myke King CEng FIChemE

Director of Whitehouse Consulting, an independent advisor covering all aspects of process control
