News

“OPC Vision” Release Candidate Presented

Silicon Software GmbH

In the future, the heretofore separate worlds of image processing and industrial automation will interlock seamlessly and in real time, without the use of proprietary interfaces. This is the result of the “OPC UA Companion Specification for Machine Vision” (OPC Vision for short), whose release candidate was presented, to broad approval, at the most recent IVSM (International Vision Standards Meeting) standardization meeting as well as at the automatica 2018 trade show. Image processing will thus constitute an even more important backbone of Industry 4.0 within networked production.

OPC Unified Architecture (OPC UA, from Open Platform Communications) is an interoperability standard and communication protocol for the secure, reliable, and manufacturer-, platform- and hierarchy-independent exchange of information, from the smallest sensor up to the enterprise IT level and the cloud. The standard has been named by the Industry 4.0 platform as its preferred software interface for realizing Industry 4.0 and is being implemented both as a general base specification and in companion specifications for individual industries and technologies, such as OPC Vision for image processing. This specification is planned for the factory floor as well as for industrial markets in general.

The goal is the integration of intelligent image processing components, up to entire image processing systems, into industrial automation (production control, machine networks, and IT systems) via OPC UA, enabling machine vision to communicate with the entire factory. A generic interface is being created for image processing systems at the user level, including a semantic description of image data. Two years ago, on the occasion of the automatica trade fair, work on OPC Vision began with the signing of a memorandum of understanding between the VDMA Machine Vision Group and the OPC Foundation. During the May 2018 IVSM in Frankfurt, the Working Group’s proposal for a first release candidate of the specification gained broad support. Recently, this was presented to the public at this year’s automatica. The specification’s technology is compatible with Industry 4.0 and with the GenICam (Generic Interface for Cameras) industry standard, with a focus on the semantics of the software model.

[Graphic: OPC Vision network]

Image processing systems play a prominent role in industrial production, though the data they collect and interpret are of even greater importance, as they are needed at multiple points in the production process for process optimization: to test product quality, identify components, monitor conditions, and regulate procedures, to name a few examples. Systems range from vision sensors and (embedded) smart cameras up to multi-computer setups. Depending upon the task at hand, the output of an image processing system encompasses multiple types of digital images, such as 1D scanner lines, 2D images, 3D point clouds, and image sequences in the visible and invisible ranges (ultraviolet, infrared, X-ray, radar, ultrasound, etc.), delivered potentially as complete result images, metadata, or results.

Uniform communication of decentralized components

In production, such a system acts within the environment of a programmable logic controller (PLC) that, for example, sends a start signal and acquires result information in return, or that displays operational status and readiness. Communication between them, previously based on different protocols, will now be standardized. Hierarchical structures and interfaces are being dissolved in favor of a horizontally and vertically integrated Gigabit Ethernet IT network consisting of a large number of components that function as decentralized cyber-physical systems and that in future will be able to replace central controls. At first, OPC UA will supplement existing interfaces such as fieldbuses, perhaps even replacing them completely at a later point.
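As a rough illustration of this interaction, the following sketch shows how a controller-side client might trigger an image processing system and read back its status over OPC UA, here using the open-source python-opcua library. The endpoint address and the node names (VisionSystem, Trigger, SystemState) are assumptions made for this example and do not come from the OPC Vision specification.

```python
# Minimal sketch, assuming a vision system exposes an OPC UA server at this
# endpoint with a "VisionSystem" object in namespace index 2 (hypothetical).
from opcua import Client

client = Client("opc.tcp://vision-system.local:4840")  # assumed endpoint
client.connect()
try:
    objects = client.get_objects_node()
    vision = objects.get_child(["2:VisionSystem"])   # hypothetical object
    trigger = vision.get_child(["2:Trigger"])        # hypothetical start-signal node
    state = vision.get_child(["2:SystemState"])      # hypothetical status node

    trigger.set_value(True)                 # "start signal" from the controller
    print("State:", state.get_value())      # read back operational status
finally:
    client.disconnect()
```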

OPC UA is based upon object-oriented information modeling, service-oriented architecture, inherent data security and rights management, as well as a platform-independent communication stack that extends to the user level. OPC UA describes the data, functions, and services of (embedded) devices and machines, as well as the data transport, as a client-server architecture for machine-to-machine communication. As an information model, OPC Vision contains industry-specific semantics as universally valid definitions, analogous to the GenICam SFNC (Standard Features Naming Convention) for images, where image data formats are precisely defined. In this way, multimodal sensor data can be concatenated more simply and image processing components can communicate better with one another.
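To make the idea of such an information model tangible, the sketch below sets up an OPC UA server (again with python-opcua) that exposes a few semantically named vision nodes in its own namespace. The namespace URI and the node names are invented for illustration and do not reproduce the actual OPC Vision node set.

```python
# Minimal server sketch: publish a small, semantically named vision model.
# Namespace URI and node names are illustrative, not the OPC Vision node set.
import time
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840")
idx = server.register_namespace("http://example.com/vision")  # assumed URI

objects = server.get_objects_node()
vision = objects.add_object(idx, "VisionSystem")
vision.add_variable(idx, "SystemState", "Ready")
vision.add_variable(idx, "LastResultImageFormat", "Mono8")   # semantic metadata
vision.add_variable(idx, "SaleableProductCount", 0)

server.start()
try:
    while True:         # serve until interrupted; a real system would update
        time.sleep(1)   # the variables from its vision pipeline
except KeyboardInterrupt:
    server.stop()
```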

[Graphic: OPC UA networking]

The data described semantically via OPC UA can be understood and used by any OPC UA-capable device. A single server such as an image processing system, for example, is sufficient to administer production and process data, alarms, events, historical data, and program calls. A client such as a PLC, SCADA, MES, or ERP system can directly call a server’s methods and receives a return value as a result, such as the number of saleable products or ordering requirements. Thus, the ERP system is in a position to determine frame grabber properties or the status of the entire image processing system. Continuous streams from image processing systems represent a special case and can be retrieved by clients via events. Every function required in an image processing system is abstracted in the information model. Manufacturer-specific applications can be added, as manufacturers provide additional services that reflect the special abilities of their image processing systems.
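A hedged sketch of such a method call, again with python-opcua: a client browses to the vision object and invokes a method that returns a count. The method name GetSaleableProductCount and the browse path are assumptions made for this example, not names from the specification.

```python
# Sketch of a client calling a server method and receiving a return value.
# Browse names and the method name are hypothetical.
from opcua import Client

client = Client("opc.tcp://vision-system.local:4840")  # assumed endpoint
client.connect()
try:
    vision = client.get_objects_node().get_child(["2:VisionSystem"])
    # Methods are called on the parent object, passing the method's browse name.
    count = vision.call_method("2:GetSaleableProductCount")
    print("Saleable products:", count)
finally:
    client.disconnect()
```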

Real time-capable OPC UA client-server network

OPC Vision provides a framework in which image processing systems within a production environment take on various control statuses vis-à-vis other network components, in particular the PLC, and provide consistent data for communication with these components (transfer of tasks and confirmation of results). Control status changes are either initiated by the client or generated internally and communicated back to the client. In the event of errors during image processing, the image processing system issues an error or warning alert; the specification defines this generically, covering the structure of the alert and its interaction with the control status.
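One way a client could be notified of such alerts is via an OPC UA event subscription, sketched below with python-opcua. This is generic OPC UA event handling under assumed names; the specific alert and state types defined by OPC Vision are not modeled here.

```python
# Sketch: subscribe to events (e.g. error/warning alerts) from the vision server.
# Generic OPC UA event handling, not the specific OPC Vision alert types.
from opcua import Client

class AlertHandler:
    def event_notification(self, event):
        # Called by the subscription thread whenever the server fires an event.
        print("Alert received:", event)

client = Client("opc.tcp://vision-system.local:4840")  # assumed endpoint
client.connect()
try:
    subscription = client.create_subscription(500, AlertHandler())  # 500 ms period
    subscription.subscribe_events()  # default event source is the server node
    input("Listening for alerts, press Enter to stop...")
finally:
    client.disconnect()
```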

The division between IT and the real time-capable machine network that existed before is nullified by two OPC UA extensions: in the publisher-subscriber model, a publisher (server) sends information to the network, which can be “subscribed to” by several clients. The combination with time-sensitive networking (TSN) for industrial Ethernet networks makes this exchange real time-capable. Time-critical data can thus be transferred via a common network, for example to transfer camera images as a stream and to operate cameras, frame grabbers, and PLCs concurrently via OPC UA. Currently, communication networks are not designed to transmit large amounts of image data, but at the protocol level the building blocks for this are already in place for the future.
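The publisher-subscriber extension and TSN operate below the level of a typical application library. As a loose approximation of the data flow only, the sketch below uses the classic client-server subscription mechanism of python-opcua to receive continuously updated result values; it is explicitly not OPC UA PubSub over TSN, and the node names are again hypothetical.

```python
# Approximation only: classic client-server data-change subscription,
# not OPC UA PubSub over TSN. Node names are hypothetical.
from opcua import Client

class ResultHandler:
    def datachange_notification(self, node, value, data):
        # Invoked each time the monitored value changes on the server.
        print("New result value:", value)

client = Client("opc.tcp://vision-system.local:4840")  # assumed endpoint
client.connect()
try:
    result_node = client.get_objects_node().get_child(
        ["2:VisionSystem", "2:SaleableProductCount"])
    subscription = client.create_subscription(100, ResultHandler())  # 100 ms period
    subscription.subscribe_data_change(result_node)
    input("Receiving updates, press Enter to stop...")
finally:
    client.disconnect()
```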

[Graphic: OPC Vision and OPC UA networking]

Which advantages will OPC Vision provide in the future? For one, network components such as image processing devices will be based upon a single communication protocol, enabling faster integration of these devices and of their software environment into the factory floor network, PLC software, human-machine interfaces (HMIs), and IT systems. The protocol can be extended with manufacturer-specific additions. 3D image acquisition, such as that used for high-resolution 3D inline testing of microcomponents, will become part of the standard only at a later point in time. For another, uniform semantics will encourage the development of generic HMI interfaces for image processing devices. Moreover, the data gain greater reach overall, since many applications can benefit from them, something that is essential for machine engineering.

Simple integration of embedded cameras and sensors into the production line represents a further advantage for novel applications such as autonomous drones in warehouses, cobots, or HMIs. For embedded devices, image processing applications can be programmed in very little time using the VisualApplets graphical FPGA development environment to flexibly equip them with versatile intelligence (including via deep learning), to reduce the high data volumes of resulting images using image preprocessing, and to directly monitor actuators.

 
