BSI PD IEC/TR 62010:2016
Analyser systems. Maintenance management
| Published By | Publication Date | Number of Pages |
|---|---|---|
| BSI | 2016 | 74 |
1.1 Purpose
This document provides an understanding of analyser maintenance principles and approaches. It is intended as a reference for individuals closely involved with the maintenance of analytical instrumentation, and gives guidance on setting performance targets, strategies to improve reliability, methods to measure performance, and the organisations, resources and systems that need to be in place to allow this.
Effective management of on-line analysers is only possible when key criteria have been identified and tools for measuring these criteria established.
On-line analysers are used in industry for the following reasons:
- Safety and environmental. One category of on-line analyser comprises those used to control and monitor safety and environmental systems. The key measured parameter for this category of analyser is on-line time. This is simpler to measure than an analyser's contribution to profits but, as with process analysers applied for profit maximisation, the contribution will depend on the analyser's ability to perform its functional requirements on demand.
- Asset protection and profit maximisation. On-line analysers falling into this category are normally those impacting directly on process control. They can impact directly on protection of assets (e.g. corrosion, catalyst contamination) or product quality, or can be used to optimise the operation of the process (e.g. energy efficiency). For this category of analysers, the key measured parameter is either the cost of damage to plant or the direct effect on the overall profit of the process unit. Whether an analyser is justified on a process can be assessed by quantifying its payback time, the pass/fail target typically being 18 months. The contribution of the analyser to reducing the extent of damage to, or increasing the profit of, the process unit is difficult to measure. However, this contribution will depend on the analyser's ability to perform its functional requirements on demand.
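The payback-time criterion above can be illustrated with a minimal sketch; the installed cost and monthly benefit figures below are hypothetical, not taken from this document:

```python
def payback_months(installed_cost: float, monthly_benefit: float) -> float:
    """Simple payback time in months: installed cost divided by monthly benefit."""
    if monthly_benefit <= 0:
        raise ValueError("monthly benefit must be positive")
    return installed_cost / monthly_benefit


# Hypothetical figures: an analyser costing 90 000 currency units that
# delivers 6 000 per month in improved yield and reduced giveaway.
months = payback_months(90_000, 6_000)
print(f"payback = {months:.1f} months; meets 18-month target: {months <= 18}")
```

In practice the monthly benefit is the hard part to quantify, as the text notes; the arithmetic itself is trivial once operations and maintenance agree a benefit figure.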
This document focuses on the costs and benefits associated with traditional analyser maintenance organisations. Due to the complexity of modern analysers, support can be required from laboratory or product-quality specialists (for example, for chemometric models), who may work in other parts of the organisation. It is therefore important to include their costs in the overall maintenance cost.
1.2 Questions to be addressed
When considering on-line analyser systems and their maintenance, the following list of key questions is useful in deciding where gaps exist in the maintenance strategy.
- What is the uptime of each critical analyser? Do you measure uptime and maintain records? Do you know the value provided by each analyser and therefore which ones are critical? Do you meet regularly with operations ("the customer") to review priorities?
- What is the value delivered by each analyser in terms of process performance improvement (i.e. improved yield, improved quality, improved manufacturing and/or process cycle time, process safety (e.g. interlocks), environmental importance)? Is this information readily available and agreed in meetings with operations? Is the value updated periodically?
- What is the utilisation of each critical analyser? That is, if the analyser is used in a control loop, what percentage of the time is the loop on manual due to questions about the analyser data? Do you keep records of the amount of time that analyser loops are in automatic? Do you meet regularly with operations to review the operators' views on the plausibility of the analyser data?
- Do you have a regular preventive maintenance programme set up for each analyser which includes regular calibrations? Does the calibration/validation procedure include statistical process control (SPC) concepts: upper/lower limits and measurement of analyser variability (or noise)? Is the procedure well documented? Do you conduct it regularly, even when things are running well?
- Do you have trained personnel (capable of performing all required procedures and troubleshooting the majority of analyser problems) who are assigned responsibility for the analysers? Do the trained personnel understand the process? Do they understand any laboratory measurements which relate to the analyser results?
- Do the trained maintenance personnel have access to higher-level technical support as necessary for difficult analyser and/or process problems? Do they have ready access to the individual who developed the application? Do they have ready access to the vendor? Can higher-level support personnel connect remotely to the analyser to observe and troubleshoot?
- Do you have a maintenance record-keeping system which documents all activity involving the analysers, including all calibration/validation records and all repairs and/or adjustments?
- Do you use the record-keeping system to identify repetitive failure modes and to determine the root cause of failures? Do you track the average time to repair analyser problems? Do you track the average time between failures for each analyser?
- Do you periodically review the analysers with higher-level technical resources to identify opportunities to significantly improve performance by upgrading the analyser system with improved technology or a simpler/more reliable approach?
- Do you meet regularly with operations personnel to review analyser performance, update priorities, and understand production goals?
- Do you have a management framework that understands the value of the analysers and is committed to and supportive of reliable analysers?
- Do you know how much the maintenance programme costs each year, and is there a solid justification for it?
Consideration of the above questions will help to identify opportunities for continuously improving the reliability of installed process analysers. Once the opportunities are identified, the following clauses give guidance in achieving solutions, with the aim of:
- maximising the performance and benefit of installed analysers;
- achieving full operator confidence in the use of on-line analysers;
- making analyser output data reliable enough to be used by operators, control systems and other users, improving plant operation against world-class manufacturing metrics.
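The uptime, time-to-repair and time-between-failures questions above can be sketched as a small calculation over a failure log. The log format, field layout and figures below are assumptions for illustration, not defined by this document:

```python
from datetime import datetime

# Hypothetical failure log for one analyser over Q1 2016:
# (failure start, repair complete) pairs taken from maintenance records.
failures = [
    (datetime(2016, 1, 10, 8, 0), datetime(2016, 1, 10, 14, 0)),
    (datetime(2016, 2, 2, 9, 0), datetime(2016, 2, 3, 9, 0)),
    (datetime(2016, 3, 15, 0, 0), datetime(2016, 3, 15, 6, 0)),
]
period_start, period_end = datetime(2016, 1, 1), datetime(2016, 4, 1)

total_hours = (period_end - period_start).total_seconds() / 3600
downtime = sum((end - start).total_seconds() / 3600 for start, end in failures)

availability = 100 * (total_hours - downtime) / total_hours  # percent uptime
mttr = downtime / len(failures)                              # mean time to repair, h
mtbf = (total_hours - downtime) / len(failures)              # mean time between failures, h

print(f"availability = {availability:.1f} %, MTTR = {mttr:.1f} h, MTBF = {mtbf:.0f} h")
```

Tracking these three figures per analyser, as the questions suggest, is what makes the record-keeping system useful for spotting repetitive failure modes rather than just logging work done.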
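The SPC question above (upper/lower limits on calibration/validation checks, plus a measure of analyser noise) can be illustrated with a minimal sketch. The warning limits at two standard deviations and action limits at three are a common control-charting convention, and the readings below are hypothetical:

```python
import statistics

# Hypothetical validation-check readings against a known reference sample.
readings = [49.8, 50.1, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.3, 49.9]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)  # sample standard deviation: the analyser "noise"

warning = (mean - 2 * sd, mean + 2 * sd)  # ~95 % of in-control readings fall here
action = (mean - 3 * sd, mean + 3 * sd)   # outside this, stop and investigate

new_reading = 50.6
if not (action[0] <= new_reading <= action[1]):
    status = "action: reading outside action limits, investigate the analyser"
elif not (warning[0] <= new_reading <= warning[1]):
    status = "warning: reading outside warning limits, watch for drift or bias"
else:
    status = "in control"
print(status)
```

Annex C of the document describes a related method based on standard deviations of differences; the sketch above only shows the general idea of deriving chart limits from the analyser's own variability.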
PDF Catalog
| PDF Pages | PDF Title |
|---|---|
| 4 | CONTENTS |
| 7 | FOREWORD |
| 9 | INTRODUCTION Figures Figure 1 – Flow path detailing interrelationships of subject matter in IEC TR 62010 |
| 11 | 1 Scope 1.1 Purpose 1.2 Questions to be addressed |
| 12 | 2 Normative references 3 Terms and definitions |
| 17 | 4 Classifying analysers using a risk based approach 4.1 General |
| 18 | Figure 2 – Generalized risk graph |
| 19 | 4.2 Safety protection 4.3 Environmental protection Tables Table 1 – Typical application of elements in the risk graph |
| 21 | 4.4 Asset protection 4.5 Profit maximisation |
| 22 | 4.6 Performance target Table 2 – Best practice availability targets |
| 23 | 4.7 Maintenance priority 4.8 Support priority 5 Maintenance strategies 5.1 General 5.2 Reliability centred maintenance (RCM) 5.2.1 General |
| 24 | 5.2.2 Reactive maintenance 5.2.3 Preventative or planned maintenance (PM) |
| 25 | 5.2.4 Condition based strategy 5.2.5 Proactive maintenance 5.2.6 Optimising maintenance strategy |
| 26 | 5.3 Management systems/organisation Figure 3 – Failure mode pattern |
| 27 | Figure 4 – Organisation of analyser functions |
| 28 | 5.4 Training/competency 5.4.1 General 5.4.2 Training needs 5.4.3 Selecting trainees 5.4.4 Types of training |
| 29 | 5.4.5 Vendor training 5.4.6 Classroom training 5.4.7 Technical societies 5.4.8 User training |
| 30 | 5.4.9 Retraining 5.5 Optimal resourcing 5.5.1 General |
| 31 | 5.5.2 Equivalent analyser per technician (EQAT) calculation method 5.5.3 Ideal number of technicians |
| 32 | 5.5.4 In-house or contracted out maintenance Figure 5 – Relative maintenance costs |
| 33 | 5.5.5 Off-site technical support requirement 5.6 Best practice benchmarking 5.7 Annual analyser key performance indicator (KPI) review |
| 34 | 6 Analyser performance monitoring 6.1 General Table 3 – Example agenda for a KPI review meeting |
| 35 | 6.2 Recording failures – reason/history codes 6.2.1 General 6.2.2 Typical failure pattern |
| 36 | Figure 6 – Life cycle diagram |
| 37 | 6.3 SPC/proof checking 6.3.1 Analyser control charting Figure 7 – Reliability centred maintenance failure patterns |
| 38 | Figure 8 – Control charting diagram |
| 39 | 6.3.2 Control chart uncertainty limits Figure 9 – Examples of analyser results |
| 40 | 6.4 Analyser performance indicators 6.4.1 Key performance indicators (KPI) |
| 41 | 6.4.2 Additional analyser performance indicators |
| 42 | 6.4.3 Points to consider in measurement of analyser availability |
| 43 | Figure 10 – Example of control charting with linear interpretation |
| 44 | 6.4.4 Points to consider in measurement of operator utilisation |
| 45 | 6.4.5 Points to consider in measurement of analyser benefit value 6.4.6 Deriving availability, utilisation and benefit measurement Figure 11 – Deriving availability, utilisation and benefit measurement |
| 46 | 6.4.7 Optimising analyser performance targets |
| 50 | 6.4.8 Analyser maintenance cost against benefit 6.5 Analyser performance reporting |
| 52 | Annex A (informative) Equivalent analyser per technician (EQAT) A.1 Part 1 – Calculated technician number worksheet A.2 Part 2 – Equivalent analyser inventory worksheet calculation methodology |
| 54 | A.3 Part 3 – Equivalent analyser inventory worksheet |
| 59 | Annex B (informative) Example interpretation of control chart readings Figure B.1 – Example of accurately distributed control chart reading Figure B.2 – Example of biased control chart reading |
| 60 | Figure B.3 – Example of drifting control chart reading Figure B.4 – Example of control chart reading, value outside warning limit |
| 61 | Annex C (informative) Determination of control chart limits by measuring standard deviations of differences Table C.1 – Example distillation analyser data for determining control chart limits |
| 62 | Figure C.1 – Example determination of control chart limits by measuring standard deviations |
| 63 | Annex D (informative) Adopting a maintenance strategy Figure D.1 – Determining appropriate maintenance strategy |
| 64 | Annex E (informative) Examples of analyser cost against benefit and analyser performance monitoring reports Table E.1 – Analyser costs versus benefits (1 of 2) |
| 66 | Table E.2 – Analyser technician resources Table E.3 – Technician skill and experience data Table E.4 – Variation of availability with manning levels and overtime |
| 67 | Table E.5 – Sitewide average analyser data |
| 68 | Figure E.1 – Achievable availability against manning Figure E.2 – Achievable benefit against manning |
| 69 | Annex F (informative) Typical reports for analyser performance monitoring Figure F.1 – Uptime in Plant “A” |
| 70 | Table F.1 – Results of analyser performance in Plant “A” |
| 71 | Bibliography |