Get the information you need about IoT in the feed industry from Repete’s experts here. To learn more about automation software, contact us today!
At the device layer: Many devices are being built with more advanced “smart” technology and are moving from simple I/O to Ethernet connectivity; VFDs are a good example. Devices that were previously I/O-only are becoming smart devices that are far easier to connect. Device manufacturers are pushing to provide connected groups of devices that can be interfaced with larger systems. This lets equipment providers retain proprietary control of their devices while opening them up to large populations of users and larger coordinating control systems, without the disadvantages of traditional islands-of-service implementations.
At the control system layer: Control systems collect an enormous amount of data that can fuel a wide range of needs: automating orders, order routing, and sending formulations to plants for manufacturing, for example. Inventory management and coordination are others, along with OEE, performance analysis, food safety, and product-quality monitoring. Data lakes now give multi-plant corporations the ability to compare performance, efficiency, failure rates, and other aspects of mill operation across the enterprise.
At the enterprise layer: It is becoming very important for order takers, inventory managers, operations managers, and executives to be intimately aware of mill operations. For example, order takers and websites can now report where an order is in the process. Last-minute cancellations can be prevented when a product is already in production. Operations managers can remotely observe mill operations and participate in diagnosing mill failures without needing to be on site.
For OEE, this is essential. One aspect of OEE is real-time observation of progress against expected production, which only makes sense if the data arrives in real time. Order takers need to know whether an order has been started before allowing a customer to cancel it. Inventory needs to be accurate, informing inventory managers of the need to order, or automatically placing an order the moment the system sees a shortage coming. Operations managers need to know in real time whether a mill is down and at what rate production is running. Real-time data also allows management to be notified, before an order ships, if it does not meet quality definitions.
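The standard OEE calculation behind this kind of monitoring is Availability × Performance × Quality. Below is a minimal sketch in Python; the function name and parameters are illustrative, not part of any specific control system.

```python
def oee(planned_time_min, run_time_min, ideal_rate_tph, actual_tons, good_tons):
    """Standard OEE = Availability x Performance x Quality."""
    availability = run_time_min / planned_time_min                     # uptime vs. planned time
    performance = actual_tons / (ideal_rate_tph * run_time_min / 60)   # actual vs. ideal output
    quality = good_tons / actual_tons                                  # in-spec vs. total output
    return availability * performance * quality

# Example: an 8-hour shift with 60 minutes of downtime on a 20 t/h line,
# producing 126 tons, of which 120 met quality definitions.
print(round(oee(480, 420, 20, 126, 120), 3))  # -> 0.75
```

Fed with real-time run, rate, and quality data, the same calculation can be refreshed continuously rather than computed after the fact.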
Step 1: Involve a controls company that thoroughly understands the process, like Repete.
Step 2: Use the controls company to identify all known requirements for the use of data.
Step 3: Allow the controls company to help establish a data plan that covers requirements plus known use cases of which a company may not be aware.
Step 4: Join with existing IT people from the company or bring in IT people to help establish the IT infrastructure needed to support the data collection.
Step 5: Use the controls company to establish the data available and the options for additional data to be collected. Most control systems today collect data, but the customer’s needs and challenges should be considered when establishing the final data sets to be collected.
Step 6: Develop a strategy for the selection of devices based on the need for the data they capture.
Step 7: Establish the real-time nature of all data needs and have IT confirm the ability to support the needs with existing or new infrastructure.
Step 8: Collect the data into a well-organized, related data set that can be used for more than just the initial needs identified, i.e., ad hoc reporting for unforeseen or unplanned data mining.
Step 9: The short answer is to always involve an experienced professional.
The benefits can be significant. Where these techniques have been used, they often lead to increased productivity, quick identification of problems, and avoidance of feed-safety claims. A more predictable production rate is another benefit of this type of data connectivity. Results range widely across companies and implementations, but they add to the bottom line or prevent losses. Efficiency increases of 10% or more are common and go significantly higher depending on the extent to which the data is put to work. Avoidance of loss is potentially huge but harder to quantify; in one case, it prevented at least one bad load of feed per month from being delivered. In all cases, the cost to implement will be recovered over a reasonable time, and this analysis can be calculated before you implement.
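A simple, undiscounted payback calculation is one way to run that analysis up front. The figures below are purely hypothetical placeholders, not results from any actual project.

```python
def payback_months(implementation_cost, monthly_gain):
    """Months until the data project pays for itself (simple, undiscounted)."""
    return implementation_cost / monthly_gain

# Hypothetical figures: a $250,000 project, a 10% efficiency gain on
# $150,000/month of production cost, plus one $10,000 bad load avoided per month.
monthly_gain = 0.10 * 150_000 + 10_000
print(round(payback_months(250_000, monthly_gain), 1))  # -> 10.0 months to break even
```

Substituting a company's own costs and expected gains turns this into a pre-implementation estimate of recovery time.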
A better way to look at this issue is to look at the consumers of the data, which form a very large list. What matters is no longer the presentation but how each consumer will consume the data; with the advent of XML and mapping systems, presentation has largely become a non-issue. A short list of consumers: OEE systems, maintenance systems, ERP systems (and all their subcomponents, i.e., inventory, orders, adjustments, purchase orders, shipments, etc.), data lakes, ad hoc reporting systems…the list goes on endlessly.
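To illustrate why presentation becomes a non-issue: once the data is published in a structured format like XML, each consumer simply maps it into whatever shape it needs. The record below is hypothetical (the element and field names are not from any real system), using Python's standard library XML parser.

```python
import xml.etree.ElementTree as ET

# Hypothetical order-status record, as a control system might publish it.
xml_doc = """<order id="A-1042">
  <status>IN_PRODUCTION</status>
  <formula>broiler-starter</formula>
  <tons-complete>12.5</tons-complete>
</order>"""

root = ET.fromstring(xml_doc)

# Each consumer maps the same XML into the shape it needs; here, a
# flat dict such as an ERP or OEE system might ingest.
record = {
    "order_id": root.get("id"),
    "status": root.findtext("status"),
    "tons_complete": float(root.findtext("tons-complete")),
}
print(record["order_id"], record["status"], record["tons_complete"])
```

A maintenance system, a data lake loader, or an ad hoc reporting tool would each apply its own mapping to the same source document.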
This answer is too large to place here, but a short list includes: production data, including the time of all production, the time of each production step, and the results of production, including variances that occurred during production; how often each piece of equipment was in use; what equipment was used in each production run; how efficiently the mill equipment was used; what caused a mill-down situation and how long the mill was down; who was operating the mill at the time of a production event or exception; and who accepted a variance in production and whether it was in tolerance. This list is very long…
Data is initially captured by sophisticated feed mill control systems and is produced by the equipment and processes that run the mill. The control system is essential to the process because it provides the context that allows the data to be properly organized and related in a way that makes it easier to use. For example, a run is related to all the batches of that run; a weighment is related to a batch, which is related to a run. Best practice is to move this data to a central repository or database, such as a data lake. This prevents direct external access to the control system and thus reduces the risk of interference with control system operations. Reading data directly from a PLC, without the context of the control system, produces data that is difficult to use and maintain over time.
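The run → batch → weighment context described above can be sketched as a small relational schema. This is an illustrative model only (table and column names are assumptions, not any vendor's schema), using Python's built-in SQLite module:

```python
import sqlite3

# In-memory sketch of the run -> batch -> weighment hierarchy that the
# control system provides context for.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE run       (run_id INTEGER PRIMARY KEY, formula TEXT);
    CREATE TABLE batch     (batch_id INTEGER PRIMARY KEY,
                            run_id INTEGER REFERENCES run(run_id));
    CREATE TABLE weighment (weighment_id INTEGER PRIMARY KEY,
                            batch_id INTEGER REFERENCES batch(batch_id),
                            ingredient TEXT, target_kg REAL, actual_kg REAL);
""")
db.execute("INSERT INTO run VALUES (1, 'broiler-starter')")
db.execute("INSERT INTO batch VALUES (10, 1)")
db.execute("INSERT INTO weighment VALUES (100, 10, 'corn', 500.0, 498.7)")

# Because each weighment relates to a batch, and each batch to a run,
# a consumer can roll weighments up to the run level with a simple join.
row = db.execute("""
    SELECT r.formula, w.ingredient, w.actual_kg
    FROM weighment w JOIN batch b ON w.batch_id = b.batch_id
                     JOIN run r   ON b.run_id   = r.run_id
""").fetchone()
print(row)  # -> ('broiler-starter', 'corn', 498.7)
```

Without this context (for example, raw PLC tag values alone), the same weighment reading could not be traced back to its batch, run, or formula.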
a.) Most popular control systems provide pre-built reports and data views that give a user the information they seek, based on an understanding of what is needed to operate a mill. These are predictable, expected data needs that have been pre-built and formatted into a convenient view. These reports and views serve a large number of data consumers and are designed to provide insight into needed information. In this case, one only needs training on the purpose of each report and the type of information it will produce, plus how to request, display, and print the resulting data.
b.) When pre-built reports and data views are insufficient to find an answer, ad hoc reporting is required. To be effective at ad hoc reporting, begin by gaining an understanding of the context of all captured data. In well-organized data, this means understanding the use and purpose of each data set, for example run data, batch data, and weighment data, and how it relates to real operations in a mill. Once you understand the organization and purpose of the data, the next step is to choose a tool from the market that allows you to explore and mine it. Many such tools are available, including Excel. With a tool in hand and knowledge of the data, questions can be answered by mining the data.
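As a taste of what ad hoc mining looks like once you know the data's structure, here is a minimal sketch in plain Python that answers one such unplanned question: which weighments exceeded tolerance? The records and field names are hypothetical stand-ins for real weighment data.

```python
# Hypothetical weighment records, as exported from a central repository.
weighments = [
    {"batch": 10, "ingredient": "corn", "target_kg": 500.0, "actual_kg": 498.7},
    {"batch": 10, "ingredient": "soy",  "target_kg": 200.0, "actual_kg": 212.4},
    {"batch": 11, "ingredient": "corn", "target_kg": 500.0, "actual_kg": 501.1},
]

TOLERANCE = 0.02  # allow 2% variance from target

# Ad hoc question: which weighments varied from target by more than tolerance?
out_of_tolerance = [
    w for w in weighments
    if abs(w["actual_kg"] - w["target_kg"]) / w["target_kg"] > TOLERANCE
]
for w in out_of_tolerance:
    print(w["batch"], w["ingredient"])  # -> 10 soy
```

The same question could equally be answered with a pivot table in Excel or a query in a reporting tool; the prerequisite in every case is understanding what a weighment is and how it relates to batches and runs.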