Breakout session: Early Fault Detection and Predictive Methods
What is Predictive Maintenance? A Survey
'Watching systems carefully, exchanging cheap parts before they break, and tracking faults'
'A huge relief'
'Prevents a 3am call-out'
We Need a Definition/Clarification
Many definitions of predictive and preventive methods and maintenance exist, but it was
not possible in this session to agree on a single definition, specific to our applications,
that everyone accepted and that could span our range of applications (at least not with a
mature level of detail, content and context)!
With much discussion, a definition evolved for predictive methods:
Predictive methods use software and hardware tools, combined with human
inspection, to monitor equipment, systems, conditions and practice… extracting data
on health and other criteria to observe trends or uncharacteristic anomalies… using
risk analysis to decide on the level of intervention required to prevent a failure in
the most efficient and effective way.
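As a minimal sketch of the workflow this definition describes (monitor, extract data, look for trends or uncharacteristic anomalies, then decide on intervention via a simple risk weighting), the Python fragment below is illustrative only: the thresholds, the synthetic data and the criticality weight are assumptions, not a recommended implementation.

```python
import numpy as np

def rolling_mean(x, window=20):
    """Simple moving average used as a trend estimate."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def detect_anomalies(readings, window=20, n_sigma=3.0):
    """Flag samples that deviate from the recent trend by more than
    n_sigma standard deviations (a basic 'uncharacteristic anomaly' test)."""
    trend = rolling_mean(readings, window)
    residual = readings[window - 1:] - trend
    sigma = residual.std()
    return np.abs(residual) > n_sigma * sigma

def intervention_level(anomaly_rate, criticality):
    """Toy risk score: combine how often anomalies occur with how critical
    the equipment is, then map the result to an action."""
    risk = anomaly_rate * criticality
    if risk > 0.5:
        return "schedule immediate inspection"
    if risk > 0.1:
        return "inspect at next planned stop"
    return "continue monitoring"

# Example with synthetic temperature-like data (placeholder values only).
rng = np.random.default_rng(0)
readings = 40 + rng.normal(0, 0.5, 500)
readings[450:] += 5.0          # injected drift standing in for a developing fault
flags = detect_anomalies(readings)
print(intervention_level(flags.mean(), criticality=0.8))
```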
Examples of Predictive Tools and Methods
• Condition monitoring using diagnostic analysis tools within MATLAB/LabVIEW or other proprietary systems (ISIS)
• Real-time radiation & temperature monitoring for PC electronics in the tunnels, to predict issues arising (CERN)
• Real-time BLM data as a precursor to fault analysis (CERN)
• Using machine learning to predict the next fault (neural networks); see the sketch after this list
• Human experience versus data analyses (cost vs benefit)
• Beam data as a real-time and historical analysis tool
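For the machine-learning bullet above, one possible approach is sketched below: a scikit-learn classifier trained on historical monitoring features to flag time windows that precede a fault. The features, labels and model choice are assumptions for illustration, not a record of what any facility actually runs; the classification report makes the false-alarm versus missed-fault balance visible, which feeds the cost/benefit question raised in the session.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder feature matrix: each row is one time window of monitoring data
# (e.g. mean temperature, vibration RMS, BLM counts); the label says whether a
# fault followed within the next shift. All values here are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 2000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Precision/recall for the 'fault follows' class: a quick proxy for the
# trade-off between false alarms and missed faults.
print(classification_report(y_test, model.predict(X_test)))
```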
We Need Best Practices: Tools, Methods, Do's and Don'ts
• Operator… weigh operator experience against data analysis; involve operators in the analysis and in scoping the specific analysis
• Beam… a strong indicator in predictive fault analysis; also check the systems involved in beam production
• Real or fake… is it a real problem or an instrumentation/diagnostics issue, and is it a one-time event?
• Sensing… are sufficient sensors installed and taking data of reasonable quality and content?
• Data… overload, weighting, and separation of diagnostic from critical functions to avoid untestability, etc.
• Robust… robustness and comparability of data
• Sanity checks… perform them regularly (cabling/connection integrity, ID matching, MPS signals)
• Collaboration… analysts and systems experts need to work together closely
Conclusion
• Terms of reference: much discussion and clarification is needed across the many silos in
our industry to agree on the meaning of terms
• Many tools and methods exist to support predictive methods and their
applications; however, the cost/benefit needs analysing, and collaboration with
industry and academia is needed to find new and innovative ideas
• Such sessions may best be delivered by experts in the field to trigger an even
more fruitful discussion
Thank you to the session contributors
The audience was fantastic!
Thank you for all your honest and open-minded input!
Very constructive with many contributions!
Hopefully we will be able to learn even more during the next ARW!