Executive summary
As one of the most complex industrial processes around, glass manufacturing could benefit greatly from data analytics; however, the industry is known to be conservative, traditional and risk averse. Some high-end glass manufacturers are already investing in it, but for commodity glass manufacturers with limited finances, justification can be a challenge. This paper discusses some pragmatic, cost-effective approaches that can simplify data acquisition by utilizing data already available in existing systems.
Industry 4.0 is a name for the current trend of automation and data exchange in manufacturing technologies. It includes cyber-physical systems, the Internet of Things, cloud computing and cognitive computing [1].
It is no secret that the glass industry is very traditional, conservative and risk averse, but perhaps this is because glass manufacturing is one of the most complex physical and chemical industrial processes around. For the same reason it could profit greatly from data analytics, but in most branches of the industry, where margins are low and products are considered commodities, investments in anything other than directly productive manufacturing equipment are hard to justify. To be fair, process control suppliers haven't introduced big innovations either: if we compare today's systems with those supplied 30 years ago, the difference is not that great.

We all recognize that this has to change soon, simply because the industry faces huge challenges: competition against plastics, the introduction of new glass materials and remaining attractive to a young workforce, as well as energy-related challenges such as reducing carbon emissions, coping with the energy market, finding the skills required to manage fossil fuel composition fluctuations, and eventually converting to all-electric "emission-free" manufacturing. A change of attitude is required, but "change" needs to be justified, and data is what provides that justification. Some high-end glass manufacturers are already investing a huge amount of money and resources into data analytics because the complexity of their process can no longer do without it. Others will need to follow, and we, the suppliers of this technology, have an obligation to keep it pragmatic and at an acceptable price level to enable glass manufacturing to remain competitive.
Eurotherm™ by Schneider Electric™ undoubtedly has a very strong opinion about the common-sense use of data, analytics and so-called 'Industry 4.0'. We have published articles about these subjects in Glass Worldwide magazine before, and of course we fully agree that the glass industry will profit from the introduction of both data management and data analytics to improve manufacturing processes. Personally, I have been involved in data capture and data analytics for at least 15 years, from around the time SCADA systems were first able to store process data digitally. While utilizing long-term furnace and forehearth data to understand glass processes and improve their control strategies, we also found simplified algorithms that were able to predict NOx and job-change behavior during the early days of model-based predictive control. This was in reaction to the introduction of CO2 and NOx trading in the Netherlands, which forced the company I worked for to account for and justify our emissions to the government. To simplify that administration we introduced industrial databases that could manage data storage at the required speed and volume. So companies that had to set up emission trading systems became the first to introduce industrial databases and analytics, purely out of the need to manage their CO2 and NOx administration; otherwise they would probably never have considered such a technology, and certainly not at that time.

However, many companies are still using many different types of data sources, ranging from process control databases to spreadsheets, text documents and handwritten reports. All these different reports serve a specific purpose but are very hard to combine into an open data source for process, manufacturing, maintenance, quality and energy efficiency improvement purposes.
Another obstacle is that when a company puts a factory together, it typically involves multiple specialized equipment suppliers, each of whom introduces their own preferred automation system with its own data collection, storage and naming scheme. The result is that the data exists but is difficult to combine into an open source for analytics. Sometimes these databases and their data are not even time-synchronized, so combining batch, sequential and continuous process data can be very confusing.
Individual parts of the plant should no longer be considered stand-alone environments. A common-sense solution to this problem is to use a single, common industrial data management system per site to store and manage all process-related data. This is not necessarily an easy thing to manage, and in most facilities it is impossible to change what is already installed. To find process correlations, data needs to be consistent even when different suppliers provide specific parts of a plant. To deal with this, consider installing an overarching database system such as a Wonderware™ industrial database. Fortunately, most data systems have interfacing capabilities that enable them to be linked to this kind of system, allowing sensible name tagging and providing time-synchronized, open data.
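As a purely illustrative sketch (the article does not prescribe any particular implementation), the normalization layer such a site-wide database implies could look like the following: supplier-specific tag names and local timestamps are mapped onto one consistent, time-synchronized record. Every tag name and mapping here is hypothetical.

```python
# Hypothetical sketch: normalizing readings from different suppliers'
# systems into one site-wide, time-synchronized record.
from dataclasses import dataclass
from datetime import datetime, timezone

# Map each supplier's local tag onto the site-wide naming convention
# (all entries are invented for illustration).
TAG_MAP = {
    ("batch_house", "WGH_01"): "BatchHouse.Scale1.Weight_kg",
    ("furnace",     "PT-104"): "Furnace1.Combustion.Pressure_Pa",
    ("forehearth",  "TC_FH2"): "Forehearth2.Zone1.GlassTemp_C",
}

@dataclass
class Sample:
    tag: str             # site-wide tag name
    timestamp: datetime  # always stored in UTC
    value: float

def normalize(source: str, local_tag: str,
              local_time: datetime, value: float) -> Sample:
    """Translate a supplier-specific reading into the common convention."""
    site_tag = TAG_MAP[(source, local_tag)]
    # Time-synchronize: convert the source system's (timezone-aware)
    # timestamp to UTC so data from all suppliers lines up.
    utc_time = local_time.astimezone(timezone.utc)
    return Sample(site_tag, utc_time, value)
```

In practice, the interfacing capabilities mentioned above (OPC or historian connectors, for example) would feed such a mapping continuously; the point is only that tag translation and time synchronization happen in one place.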
Another problem to overcome is the limitation of traditional tag name conventions, which typically reflect the different equipment suppliers of each part of the process, as well as early versions of process control systems that focused on the original field equipment, process control hardware, maintenance and drawing standards. Developing a new, open tag name convention is not an easy task, as it now needs to serve different requirements and a variety of data users.
We have found that if the naming convention is too complex and difficult to remember, it represents a big hurdle for those who want to perform their own analytics or put specific reports together. Therefore, to help process knowledge experts find data-based improvements, the naming convention needs to be simple, easy to use, cover the whole process and be standardized in some way. This is not a simple set of requirements, and we have already spent a lot of time developing bespoke solutions. One example showed us that if a naming convention is understandable to those who have an interest in the data, the results of their analytics are amazing. However, this subject still needs more discussion, investigation and development in order to find a "one size fits all" approach.
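To make the idea concrete, here is a minimal, hypothetical sketch of one possible convention of the form Area.Unit.Measurement_Unit, together with a validator that rejects malformed names. The pattern and the example tag are assumptions for illustration, not a proposed standard.

```python
# Hypothetical tag convention: Area.Unit.Measurement_Unit,
# e.g. "Furnace1.Zone3.CrownTemp_C" -- simple enough to remember,
# structured enough to cover the whole process.
import re

TAG_PATTERN = re.compile(
    r"^(?P<area>[A-Za-z]+\d*)\."     # plant area, e.g. Furnace1
    r"(?P<unit>[A-Za-z]+\d*)\."      # equipment or zone, e.g. Zone3
    r"(?P<measure>[A-Za-z]+)_(?P<unit_of_measure>[A-Za-z%]+)$"
)

def parse_tag(tag: str) -> dict:
    """Split a site-wide tag into its parts, rejecting malformed names."""
    match = TAG_PATTERN.match(tag)
    if match is None:
        raise ValueError(f"tag does not follow the site convention: {tag!r}")
    return match.groupdict()

print(parse_tag("Furnace1.Zone3.CrownTemp_C"))
# -> {'area': 'Furnace1', 'unit': 'Zone3',
#     'measure': 'CrownTemp', 'unit_of_measure': 'C'}
```

A convention this shallow will not satisfy every user, which is exactly the "one size fits all" difficulty described above.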
There are some other concerns to consider. If, for example, we compare raw material weight measurements coming from the batch house with furnace pressure values and visual inspection machine images, it becomes obvious that the system needs to be able to manage different data formats, occurring at different times and with different resolutions. Nevertheless, if we want to be able to analyze glass defects and investigate whether they correlate with batch impurities or the melting process, we need to be able to capture all these types of process data.
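To illustrate what "different times with different resolutions" means in practice, the sketch below joins irregular cold-end defect records to regularly sampled furnace pressure using a last-known-value merge; all values and column names are invented for the example.

```python
# Hypothetical alignment of data captured at different resolutions:
# attach the most recent furnace pressure reading to each defect event.
import pandas as pd

# Regular one-second furnace pressure samples (illustrative values).
pressure = pd.DataFrame({
    "time": pd.date_range("2017-01-01 00:00:00", periods=5, freq="1s"),
    "furnace_pressure_pa": [12.1, 12.3, 12.0, 11.8, 12.2],
})

# Irregular defect records from the inspection machine.
defects = pd.DataFrame({
    "time": pd.to_datetime(["2017-01-01 00:00:02.4",
                            "2017-01-01 00:00:03.9"]),
    "defect_type": ["stone", "blister"],
})

# Last-known-value join: each defect gets the pressure sample
# immediately preceding it.
aligned = pd.merge_asof(defects.sort_values("time"),
                        pressure.sort_values("time"),
                        on="time", direction="backward")
print(aligned)
```

Images from the inspection machine would of course need different handling again; the point is that the data system, not the analyst, should carry the burden of aligning such heterogeneous data.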
Another dilemma is that we don't know upfront where specific correlations occur in our process; the first exercise is to use the stored data to find out. It therefore makes sense to store it all and to spend sufficient time understanding the dynamics of the process, so that data can be collected at an adequate resolution. It is obvious that furnace glass temperature behaves differently to furnace pressure, and that set points only need to be stored when they change. Beyond that, we should at least consider storing specific events that are not captured by our systems, such as weather conditions, the opening or closing of ventilation ports, and so on. One example we came across was a manufacturer who painfully discovered, after the fact, that a huge furnace data set was almost useless because it did not contain the furnace pull rate. Getting the data right and stored correctly is essential to getting decent results from data analytics.
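To illustrate the store-on-change idea for set points, here is a minimal deadband filter; the function, the zero deadband and the sample values are all hypothetical.

```python
# Hypothetical store-on-change (deadband) filter: a sample is stored
# only when it differs from the last stored value by more than the
# deadband -- appropriate for set points, which change rarely.
def store_on_change(samples, deadband=0.0):
    """Yield (timestamp, value) pairs that changed by more than deadband."""
    last = None
    for timestamp, value in samples:
        if last is None or abs(value - last) > deadband:
            yield timestamp, value
            last = value

setpoints = [(0, 1450.0), (1, 1450.0), (2, 1450.0), (3, 1455.0), (4, 1455.0)]
print(list(store_on_change(setpoints)))  # [(0, 1450.0), (3, 1455.0)]
```

Fast-moving signals such as furnace pressure would instead be sampled at full resolution, which is exactly why the storage strategy has to follow the dynamics of each signal.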
Once a good database foundation with sufficient data history is available, all interested personnel should be able to start doing some analysis. Initially it makes sense to establish some proofs of concept by investigating whether obvious correlations can be found in the data set. These exercises will hopefully strengthen confidence in the data; once that has been done, more complex analysis can be considered.
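A proof-of-concept check can be as small as confirming that a relationship everyone already believes in actually shows up in the stored data. A sketch with invented numbers and an assumed fuel-flow/temperature relationship:

```python
# Hypothetical sanity check: does an 'obvious' correlation
# (fuel flow vs. furnace temperature) appear in the data set?
import pandas as pd

df = pd.DataFrame({
    "fuel_flow":    [100, 105, 98, 110, 103, 97],          # illustrative
    "furnace_temp": [1448, 1452, 1445, 1458, 1450, 1444],  # illustrative
})
print(df["fuel_flow"].corr(df["furnace_temp"]))  # Pearson coefficient
```

If even such checks fail, the problem is more likely in the data collection than in the process, and that is worth knowing before attempting anything more complex.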
In my experience of performing data analytics on glass processes, even if the data is consistent and the analytic tools are extremely smart, it is still difficult to work without specific glass process knowledge. Executing performance benchmark comparisons between different manufacturing plants, or even between lines, is challenging. We found that before we could start using specific process data we had to run different types of filters to rule out data errors; exercises that are extremely hard without specific process knowledge. Even though tools have become more intelligent over the years, it is still not possible for them to produce decent results and solutions without human direction. However, through carrying out this kind of exercise, we found correlations that we never knew existed, which on closer inspection turned out to be perfectly valid.
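As an example of the kind of filter pass described above, here is a sketch that drops physically implausible readings and stuck-sensor stretches from a temperature series; the bounds and window length are invented, and choosing them sensibly is precisely where glass process knowledge comes in.

```python
# Hypothetical data-error filter for a glass temperature series.
import pandas as pd

def filter_glass_temp(series: pd.Series) -> pd.Series:
    """Drop implausible readings and frozen-sensor stretches."""
    plausible = series.between(800, 1700)  # illustrative bounds, degrees C
    # A sensor reporting exactly the same value for 10 consecutive
    # samples is treated as stuck (again, an illustrative rule).
    stuck = series.diff().abs().rolling(10).sum() == 0
    return series[plausible & ~stuck]
```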
The glass industry will profit greatly from Industry 4.0 technologies, because manufacturers need to efficiently manage one of the most complex physical and chemical industrial processes around. Along with the conversion towards new melting and forming technologies, it will most likely become the next big innovation step for the industry, making it more efficient and keeping it attractive to work in. However, there is nothing wrong with being traditional as long as it serves a purpose, and that purpose should be to get the foundations right, get the right people involved and make the aims clear before even starting. The foundation of Industry 4.0 is the data, the people and the aims, so let's be pragmatic and start there.
About the author
René Meuleman studied electrical engineering, then began his career in the paper industry before switching to glass. During his early years, he built his knowledge and experience through the design and development of electronic quality equipment for container glass manufacturing, and was involved in first-generation PLC and DCS systems, as well as electronic timing systems for IS machines.
René worked on several model-based predictive control (MPC) projects and was involved in object-oriented engineering method developments. He became responsible for process control inside the BSN group, and later for European plant process control and forming electronics inside the Owens-Illinois group.
Ten years ago he left O-I and joined the Eurotherm by Schneider Electric group, where he is responsible for technical and commercial glass business development. Based on the Eurotherm and Schneider Electric portfolio, and together with his global glass business team, he works on the development of innovative, pragmatic and competitive glass manufacturing process and power control systems.
First presented at Glassman Europe 2017 in Lyon, France
Written by: René Meuleman
Presented by: Christian Megret
References
1. Wikipedia, "Industry 4.0".