AIA Vision Week

Vision Week Conference Registration

SIGN UP FOR YOUR AIA VISION WEEK SESSIONS

BOOKMARK THIS PAGE! The only ways to return to this conference registration page are to use your BACK button, bookmark the page, or use the link in the confirmation email you received.

Thanks for registering for AIA VISION WEEK. There are more than 30 expert presentations – including three insightful keynotes – available May 18-22.

To make the most of your AIA VISION WEEK experience, you must sign up below for EACH of the sessions you want to attend. You’ll then receive a confirmation email with a link that allows you to attend that VISION WEEK session and add it to your calendar. Please refer to the email link sent to you by Clarissa Carvalho to access the actual session.

Questions or issues? Call us at 734-929-3260 (M-F, 8:30am-4:30pm EDT) or email us.

VISION PRODUCTS SHOWCASE: Beginning May 18, don’t forget to spend time in the Vision Products Showcase, where you can see the latest in vision and imaging technologies and connect with more than 100 leading companies. We will email you more information on how to enter the showcase once it opens to the public.

Visit the Vision Products Showcase
MONDAY, MAY 18, 2020
10:00 am - 10:30 am ET

Laser Triangulation 101

Mattias Johannesson, SICK

Laser triangulation is a well-proven 3D imaging technology. This session discusses how to get the most out of a laser triangulation measurement system, as well as the limitations of the technology. We will go through optical considerations, lasers, sensors, and systems, and compare laser triangulation with other key 3D technologies.
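For orientation, the basic geometry can be written down in a couple of lines. The relation below is an illustrative sketch for one common configuration (laser sheet perpendicular to the transport surface, camera viewing axis at angle α to the laser), not material from the session:

\[
  \Delta x \;\approx\; M\,\Delta z\,\sin\alpha
  \qquad\Longleftrightarrow\qquad
  \Delta z \;\approx\; \frac{\Delta x}{M\sin\alpha}
\]

where \(\Delta z\) is the height change on the object, \(\Delta x\) the resulting shift of the imaged laser line on the sensor, \(M\) the lens magnification, and \(\alpha\) the triangulation angle. Height resolution therefore improves with a larger triangulation angle and higher magnification, at the cost of more occlusion and a tighter depth of field.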

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

10:45 am - 11:15 am ET

Artificial Intelligence - Who Will Be Affected and How Will Things Change in Your Organisation

Andrew Long, Cyth Systems

Learn how AI can be used to benefit a larger portion of the population through a planned approach to deploying AI within your organization.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

11:30 am - 12:30 pm ET

KEYNOTE: Innovative Machine Vision Applications at Procter & Gamble

Mark Lewandowski, Procter & Gamble

Mark Lewandowski, the Robotics Innovation Technical Section Head at Procter & Gamble, will discuss how the global manufacturing giant is leveraging machine vision and machine learning technologies in its latest application – and what advances P&G sees on the horizon.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

12:30 pm - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Choosing the Right Machine Vision Lens

Nick Sischka, Edmund Optics

As sensors continue to evolve, the landscape of lenses designed to match them continues to grow along with them. This development has led to many more lenses being available on the market, and it can be tricky to know which lens to choose for which application. This talk will discuss important lens parameters and go into detail on what these parameters actually translate to in the real world, in order to aid in lens selection.
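As a back-of-the-envelope illustration of one such parameter, the sensor's pixel size sets the finest detail (Nyquist limit) a lens needs to resolve. The short sketch below is illustrative only; the pixel sizes in it are assumed examples, not figures from the talk:

# Sketch: sensor-limited resolution in line pairs per millimetre.
# One line pair needs at least two pixels, so the limit is 1 / (2 * pixel size).
def nyquist_lp_per_mm(pixel_size_um: float) -> float:
    pixel_size_mm = pixel_size_um / 1000.0
    return 1.0 / (2.0 * pixel_size_mm)

for pixel_um in (5.5, 3.45, 2.5):   # assumed example pixel sizes
    print(f"{pixel_um} um pixels -> {nyquist_lp_per_mm(pixel_um):.0f} lp/mm at the sensor")

A lens whose MTF has already collapsed at that frequency wastes the extra pixels, which is one reason lens and sensor selection have to be made together.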

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

2:15 pm - 2:45 pm ET

Using 3D Technology

Sean Pologruto, Basler

Learn about 3D techniques for the industrial space and how different concepts apply to specific scenarios. Current 3D technologies include:

  • Structured light
  • ToF
  • LiDAR
  • Stereo

The session will also elaborate on the challenges that come with each platform and where each might perform best.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:00 pm - 3:30 pm ET

SWIR Imaging for Machine Vision

Martin H. Ettenberg, Princeton Infrared Technologies

This session will explore non-visible machine vision applications with uncooled and cooled short-wave infrared (SWIR) imagers (750 nm to 2500 nm). This will include not only broadband imaging applications but also machine vision applications that use spectroscopic signatures for quality control as well as discrimination requirements. Various uses of the technology will be introduced, and current SWIR detection equipment, along with the tradeoffs between those technologies in various applications, will be presented.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:45 pm - 4:15 pm ET

Filters: The Key to Image Quality in Modern Vision Applications

Georgy Das, Midwest Optical Systems

Your goal is simple: acquire a clear, high-resolution, glare-free image — all while keeping costs down and ensuring that the process is repeatable. Does that sound difficult? Or even impossible? It doesn't have to be. Optical filters are a simple, cost-effective way to enhance repeatability and to achieve the highest level of performance from your machine vision system. Learn more about how optical filters can be used to solve even the toughest issues in machine vision applications and how filters can help you get ahead of the curve when it comes to next-generation applications.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

TUESDAY, MAY 19, 2020
10:00 am - 10:30 am ET

Infrared Imaging for Non-Visible Quality Assurance

Jake Sigmond and Michael McKibben, FLIR Systems

The trend of decreasing cost in infrared detector technology, combined with the increased value of identifying faulty products before shipment, has created innovative new solutions. Temperature differences of 1.8 degrees Fahrenheit or more are used to determine pass/fail criteria for packaging, sealing, plastic or metal welding, plastic molding, flaw detection, food production, die casting, and a variety of other applications. Infrared imaging repeatably and accurately illustrates the thermal patterns and gradients used to identify flaws in production processes by indicating an incomplete shape, a non-uniform temperature profile, or varying gradients. Thermal data analytics assist in finding and correcting errors in production and prevent unwanted product from reaching the customer.

Many of the same concepts used in vision systems, such as algorithms for edge detection, blob analysis, and pattern recognition, apply to infrared solutions. Industry-standard communication protocols such as GenICam, GigE Vision, RTSP, and ONVIF Profile S allow thermal cameras to integrate with commonly used software and hardware. The parameters used in infrared camera systems, emissivity and reflectivity, streamline configuration compared with the complexities of lighting common in modern vision solutions.

This technology conversation covers an introduction to infrared technology and current applications of infrared camera systems in quality assurance processes, and touches on the market trend and future of infrared in industry.
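To make the overlap with standard vision algorithms concrete, here is a minimal, hypothetical sketch (not FLIR's implementation) that thresholds a radiometric temperature image against a setpoint and uses blob analysis to flag flaw regions; the file name, setpoint, and minimum blob area are assumptions for illustration:

import cv2
import numpy as np

# Hypothetical input: an HxW float32 array of per-pixel temperatures in deg C,
# as delivered by a radiometric camera SDK (e.g. over GenICam/GigE Vision).
temps_c = np.load("seal_temperatures.npy")

setpoint_c = 120.0   # assumed expected seal temperature
tolerance_c = 1.0    # ~1.8 deg F allowed deviation, per the abstract

# Mark pixels that deviate from the expected thermal profile.
defect_mask = (np.abs(temps_c - setpoint_c) > tolerance_c).astype(np.uint8) * 255

# Blob analysis: connected regions larger than a few pixels count as flaws.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(defect_mask)
flaws = [i for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] > 25]

print("FAIL" if flaws else "PASS", f"({len(flaws)} flaw regions)")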

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

10:45 am - 11:15 am ET

How Good is Your 3D Machine Vision Solution for Manufacturing?

Dr. Kamel Saidi, NIST

3D machine vision (or 3D perception) can be a very effective solution for applications in manufacturing that require a robot to know the precise position and orientation (pose) of a part that it needs to pick up from a bin or a conveyor. Applications such as robotic assembly at small and medium manufacturers could benefit greatly from 3D perception. Being able to measure an accurate pose of a part can reduce the reliance on jigs and fixtures (that often help with part positioning) and can open up new possibilities for mobile robot arms. The problem with 3D perception systems to date is that experience with these systems has lagged significantly behind expectations. In other words, 3D perception works great in theory, but it falls short in practice. Many users of 3D perception systems can attest firsthand to the frailties of these systems and these users often turn toward more traditional 2D machine vision solutions instead.

The above problem is common to many new or advanced technologies and it is often amplified by a lack of consensus standards for these technologies. In the absence of standards, producers of 3D perception systems can use terms and metrics of their choice to specify the performance of their products. At the same time, it is close to impossible for users of these systems to either verify the claims made in the producers' specifications or to compare one producer's 3D perception system with another's. Machine vision systems that use communications standards such as GigE Vision® and Camera Link® give users of these systems assurances that the systems will communicate in a well-defined manner with other systems. However, equivalent performance standards that can provide assurances that these systems will provide some level of measurement quality (i.e., performance) are few and far between. There is little industry agreement on the meanings of even the most basic metrics (such as "resolution," "depth error," or "frame rate") for 3D perception systems.

ASTM International Committee E57 on 3D Imaging Systems and the National Institute of Standards and Technology (NIST) Intelligent Systems Division, in collaboration with leading producers of 3D perception systems, researchers, and academics, have developed a roadmap of performance standards that are needed for these systems. As part of this work, NIST also undertook an extensive market survey of commercially available as well as emerging 3D perception systems in order to understand the 3D machine vision industry landscape. This session will present key findings from the market survey as well as results of the standards roadmapping effort. It will also present recent results from subsequent efforts undertaken by NIST and industry partners to develop high-priority standards identified in the roadmap.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

11:30 am - 12:30 pm ET

KEYNOTE: Decoding Mixed Reality for Enterprise

Rajat Gupta, Microsoft Corp.

The goal of this presentation is to inform the audience of the applicability and current state of Mixed Reality technologies in the enterprise. As companies make digital transformation decisions driven by cyclic industrial evolution and macro-economic shocks such as COVID-19, it is imperative to balance immediate deployment of new technologies with planning for the future.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

12:30 pm - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Evolution of Computational Imaging in Machine Vision

Marc M. Landman, CCS America, Inc.

Computational imaging is used in the machine vision industry to improve image quality, extract features, and output images which are not possible with conventional imaging.

In this session, we will look at how computational imaging has evolved and how it can be deployed to efficiently build more reliable vision systems. We’ll cover several techniques which can be applied to different applications and the benefits they bring to users and system builders. Using practical examples, we will review how computational imaging sequences can be performed and what components are involved in that process.

Join us to learn what kinds of inspections become possible with computational imaging.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

2:15 pm - 2:45 pm ET

Ultra High Resolution Sensors - What about the Optics?

Andreas Platz, Sill Optics

Camera manufacturers are offering larger sensor sizes with smaller pixels to achieve higher resolutions of 50-100 megapixels and beyond. This creates new demands for optics that can accommodate these larger sensor sizes and smaller pixels. This presentation will provide a detailed overview of the new optical requirements for standard, telecentric, and bi-telecentric lenses, including trade-offs in resolution, field of view, magnification, aperture, wavelength range, and color correction. Topics will include optical performance goals and practical issues such as working distance, mechanical constraints, coaxial illumination, and cost. Furthermore, it will address why standard lenses can reach their limits and under which conditions an OEM design should be considered.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:00 pm - 3:30 pm ET

A Simpler Multispectral Alternative to Applying Hyperspectral Imaging in Machine Vision

Steve Kinney, Smart Vision Lights

With the ability to detect wavelengths between 1,000 and 2,500 nm, shortwave infrared (SWIR) imagers are significantly extending image capture and analysis possibilities — especially when used in conjunction with visible sensors. Vision systems able to fuse data from both spectrums hold interesting new possibilities that were previously possible only through the use of exotic hyperspectral equipment and analysis.

One of the biggest impediments to such vision systems has been the challenge of distilling complex hyperspectral data down to a reduced subset of information that would enable a more targeted multispectral approach. There is a promising solution, however: by combining arrays of narrowband LEDs spanning the visible to SWIR ranges with selective bandpass filtering techniques, it is possible to increase contrast around key wavelengths for a specific application.

This presentation will illustrate an approach to such multispectral illumination schemes, and outline how they can help reduce the cost, time, and complexity of implementing tasks previously possible only through hyperspectral technology.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:45 pm - 4:15 pm ET

New Sensors and Optical Considerations

Gregory Hollows, Edmund Optics

Session description coming soon.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

WEDNESDAY, MAY 20, 2020
9:45 am - 10:30 am ET

KEYNOTE: Vision Standards Update – Hardware Interface Standards

Bob McCurrach, Association for Advancing Automation

Join Bob McCurrach, AIA Director of Standards Development, and the Chairs of the Vision Hardware Interface Standards to learn about the latest updates. Standards include GigE Vision, USB3 Vision, Camera Link, Camera Link HS, and CoaXPress.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

10:45 am - 11:15 am ET

Using Thermal Cameras for Elevated Body Temperature Screening

Markus Tarin, MoviTHERM

Thermal cameras are being deployed in record time for screening of elevated body temperature at airports and other public places. However, as useful and capable as this technology is, there is a lot of misinformation circulating in the news. As with the implementation of any technology, there are challenges. It is important to understand the physics that affect the accuracy of an optical temperature measurement, but there are also many other factors to consider related to biophysical phenomena. This talk explains how thermal imaging technology can be used to screen people for elevated body temperature.
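One of the physical factors the abstract alludes to is emissivity: skin and other surfaces are not perfect blackbodies, and reflected ambient radiation biases the apparent temperature. The sketch below shows a commonly used simplified grey-body correction (Stefan-Boltzmann form, atmospheric transmission ignored); the numbers are illustrative assumptions, not MoviTHERM's procedure:

# Simplified radiometric correction: the camera sees emitted plus reflected
# radiation, W ~ e*T_obj^4 + (1-e)*T_refl^4, so the true object temperature is:
def corrected_temp_k(apparent_k: float, reflected_k: float, emissivity: float) -> float:
    radiated = apparent_k ** 4 - (1.0 - emissivity) * reflected_k ** 4
    return (radiated / emissivity) ** 0.25

# Assumed values: apparent reading 309.0 K, 295 K ambient, skin emissivity ~0.98
print(round(corrected_temp_k(309.0, 295.0, 0.98) - 273.15, 2), "deg C")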

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

11:30 am - 12:30 pm ET

KEYNOTE: The State of the Vision Industry Executive Roundtable

Join us as we interview leading executives on the current state of the machine vision industry and the opportunities and challenges of the COVID-19 pandemic. AIA Vice President Alex Shikany will moderate this insightful roundtable. Attendees will have the opportunity to ask these executives questions during the live discussion.

Panelists include:

  • Samuel P. Sadoulet, President and Chief Operating Officer, Edmund Optics
  • Steve Wardell, Director of Imaging, ATS Automation, and Chairman of the AIA Board of Directors
  • Dave Spaulding, President, Smart Vision Lights
  • Dr. Dietmar Ley, CEO, Basler

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

12:30 pm - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Advances in Unit-level Traceability in Life Science Industries

John Agapakis, Omron Automation Americas

The breadth and scope of traceability has expanded significantly over the years along with advances in technology, making it a ubiquitous and critical application in today’s world-class manufacturing operations and especially important in life science industries, such as medical device manufacturing, pharmaceutical packaging and clinical diagnostics instrumentation and lab automation. In addition to the economic justification of implementing unit-level traceability and the liability implications of mislabeling or recalling products, regulatory agencies and standard setting organizations have also introduced regulations requiring unit-level traceability, such as FDA’s UDI (Unique Device ID) and DSCSA (Drug Supply Chain Security Act) serialization mandates.

In this presentation, we will discuss both the internal justification and benefits of a traceability implementation as well as externally imposed mandates and regulations. We will also explore the evolution of traceability, and explain why the latest phase, which we refer to as “Traceability 4.0”, is not just about tracking products and components but also about optimizing productivity and quality by tying product to process parameters.

Regarding some of the specific automatic identification technologies used in traceability applications, we will review the applicability and relative advantages of imaging technologies such as 1D bar codes printed on labels and 2D codes directly marked on parts (DPM) as well as the complementary, non-line-of-sight technology of RFID. We will also discuss the MVRC (Mark-Verify-Read-Communicate) end-to-end traceability deployment methodology that helps ensure robustness in a traceability implementation, and will specifically highlight new integrated machine vision solutions for in-line inspection and print/mark quality verification for every label or part as it is being printed or marked.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

2:15 pm - 2:45 pm ET

Diagnostic Imaging and Functional Technology to Detect Acute and Chronic Marijuana Impairment

Dr. Denise Valenti, IMMAD

The visual system was studied in one of the largest collections of quality research on human performance under marijuana, work done by optometrists at the University of California, Berkeley in the early 1970s. This research, along with more contemporary work using electrodiagnostic techniques and complex visual fields, will be discussed, as well as future approaches with imaging.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:00 pm - 3:30 pm ET

How to Build FDA Approved Medical Imaging Applications

Darcy Bachert, Prolucid

Most companies looking to build medical imaging applications will need to go through the process of FDA approval. In this presentation we look at some of the key differentiating factors that make medical-grade applications unique, along with approaches you can take to streamline regulatory approvals and ultimately simplify and accelerate your path to market.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:45 pm - 4:15 pm ET

Imaging Solutions to Provide Zero Fault Inspections in the Biomedical and Pharmaceutical Industry

Luca Bonato, Opto Engineering

Nowadays, biomedical and pharmaceutical industries must fully comply with increasingly demanding standards, especially concerning zero-fault policies. The recent COVID-19 wave has pushed the demand for speed and quality assurance even higher.

For such sectors, machine vision plays an essential role in implementing 100% product inspection. For example, in the manufacturing process of vials it is essential to avoid contamination: this is achieved by checking the crimping of the aluminum seal, cracks and scratches on the glass surface, missing flip-off caps, vial integrity, etc.

Reliability of the inspection system is therefore critical. A smart solution can be implemented by choosing the optical components of the system wisely, saving both space in the manufacturing line and processing time in software algorithms.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

THURSDAY, MAY 21, 2020
10:00 am - 10:30 am ET

Real World Challenges for AI in Vision Applications

Dany Longval, Teledyne Imaging

Dany Longval, Vice President of Sales for Teledyne Lumenera, will address how advancements in artificial intelligence, particularly in deep learning, have accelerated the proliferation of vision-based applications.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

10:45 am - 11:15 am ET

Deep Learning for Quality Inspection: 2 Case Studies

Stephen Welch, Mariner USA

Despite making significant investments in machine vision systems to detect and classify defects, manufacturers experience high false-positive rates. While machine vision systems may include high-quality optics, effective lighting, and high-resolution image capture, their defect detection methods cannot match the accuracy of modern deep learning technology. As a result, human effort is required to compensate for the system’s inaccuracy, and high false-positive rates also prohibit further robotic automation. In this presentation, Stephen will compare traditional machine vision technology with deep learning. He will share two case studies from the automotive industry where a real-time deep learning vision system was used to improve existing machine vision systems’ accuracy, and show how he improved visual inspection accuracy by 20x-30x for these companies by using the cloud and a ResNet-based architecture. Stephen will also share how the effort fits into the broader context of deep learning, address the specific complexities of building, deploying, and maintaining deep learning-based systems on the edge in manufacturing environments, and detail how these process improvements drive significant ROI for manufacturers.
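The case studies themselves are not public code, but the ResNet-based approach mentioned in the abstract is typically set up as transfer learning on a pretrained backbone. The PyTorch sketch below is a generic, hypothetical illustration of that pattern (class count, backbone size, and training details are assumptions, not the presenter's system):

import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a pretrained ResNet backbone as a two-class defect classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # classes: good / defective

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the new head only
criterion = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: Nx3x224x224 float tensor, labels: N int64 class indices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()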

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

11:30 am - Noon ET

Accelerating AI Camera Development

Quenton Hall, Xilinx

Fueled by a trifecta of rapid advances in network training, big data, and ML research, so-called "Deep Learning" is rapidly becoming mainstream, especially in embedded vision applications where the end game is teaching machines to "see". However, Convolutional Neural Network (CNN) inference is computationally expensive, requiring billions of operations per inference. Moreover, many critical applications require extremely low latency and must support high frame rates. Given these constraints, and given a need for sub-10W power consumption, high-reliability, security, and product longevity, how do we design an integrated camera which can provide the required levels of ML inference performance?
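To give a feel for why inference runs to "billions of operations", here is a small back-of-the-envelope calculation for a single convolution layer; the layer dimensions are assumed for illustration and are not Xilinx figures:

# Multiply-accumulate (MAC) count for one convolution layer:
#   MACs = H_out * W_out * C_out * C_in * K * K
def conv_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int) -> int:
    return h_out * w_out * c_out * c_in * k * k

# Hypothetical mid-network layer on a VGA-class feature map
macs = conv_macs(h_out=240, w_out=320, c_in=64, c_out=128, k=3)
print(f"{macs / 1e9:.1f} GMACs for this single layer")   # ~5.7 GMACs

Multiply that by the dozens of layers in a typical CNN and by the target frame rate, and the need for dedicated acceleration within a tight power budget becomes clear.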

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

Noon - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Embedded Learning and the Evolution of Machine Vision

Jonathan Hou, Pleora Technologies

One of the largest influences on the future of embedded vision is artificial intelligence and machine learning, where complex systems can learn from collected data and make decisions with little to no human intervention. In this presentation, Jonathan Hou will discuss the trend toward machine learning for vision and sensor networking applications. This will include a comparison of traditional vision inspection with the advantages of AI and machine learning, the evolution of embedded platforms for vision, and an overview of how advanced sensors, including hyperspectral and 3D, can be used to augment inspection and detection applications. In particular, the presentation will focus on how system designers and developers can integrate and leverage “plug-in” machine learning and AI capabilities within existing applications as an evolutionary path toward fully networked Internet of Things and Industry 4.0 applications.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

2:15 pm - 2:45 pm ET

How Machine Vision is Enabling Smart Manufacturing

Will Healy III, Balluff

As we are swept into the fourth industrial revolution, you want to be a company that comes out on top; but at the current pace of change, are we doing the right things? Are we using the right technology? Drawing on case studies and articles, we will explore why manufacturers big and small are investing in the Industrial Internet of Things (IIoT), break down the basics of smart manufacturing, and discuss the key role machine vision plays in enabling this revolution. Looking at how guidance (VGR), inspection, gauging, and identification applications are creating an Industry 4.0 factory, we will offer simple actions you can take today to start enabling your factory for flexible manufacturing and efficient production, and you will leave empowered with confidence in machine vision as the enabling technology for your next smart manufacturing project.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:00 pm - 3:30 pm ET

Computer Vision in the Time of COVID

Eric Danzinger, Invisible AI

As COVID-related factors disrupt operations and create unforeseen challenges, learn how computer vision can keep business and manufacturing moving. Increased attrition combined with travel restrictions has disrupted access and visibility across businesses. With the rapid maturation of computer vision over the past couple of years, intelligent camera systems can solve these problems. Quality engineers can perform root cause analysis, supervisors can provide spot-training, and managers can get necessary data and insight to optimize their operations all remotely. Learn how computer vision can mitigate these new challenges and improve productivity, efficiency, and safety across your workforce.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

FRIDAY, MAY 22, 2020
10:00 am - 10:30 am ET

Machine Learning-enabled Robotics Vision in Warehouses and Factories

Bastiane Huang, OSARO

Machine learning has enabled a move away from manually programming robots to allowing machines to learn and adapt to changes in the environment. We will discuss how machine learning is currently used to enhance robotics vision and allow robots to be used in new use cases in industries such as warehousing, manufacturing, and food assembly.

We will also describe recent progress in deep learning, imitation learning, reinforcement learning, etc., and discuss the real-world requirements and challenges of various industrial problems, pipelined versus end-to-end systems, and the technology that companies in this space have developed as they address the challenges in robotics vision.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

10:45 am - 11:15 am ET

Advances in Omnidirectional Cameras and Their Impact on Robotics

Rajat Aggarwal, DreamVu

Automation is changing the global landscape by redefining the way we imagine machines and what they can do for us. The future of our warehouses, industries, homes, transport systems, and healthcare will be very different from what we currently see. The need for complete autonomy has triggered a lot of innovation in the sensor world, depth sensing in particular. We have seen many new players come to the market with different approaches toward creating ideal sensors. A lot of LiDAR companies have sprung up, in most cases demonstrating an incremental improvement over the incumbent. What seems to be missing is a focus on a scalable and efficient solution that can enable these autonomous systems with navigation and understanding in highly cluttered and dynamic environments. Currently, sensing is a bottleneck for scaling. In this talk, we will discuss advancements in omnidirectional cameras that capture the 360-degree light field of a scene and how they can bridge the gap between what current sensors offer and what the end application demands.

Cameras for omnidirectional imaging typically use a few limited field-of-view lenses to span a large section of the spherical view around the camera centre. Camera rigs or mosaics that capture small segments and stitch 360-degree panoramas are also prevalent. Cameras with multiple sensors suffer from a range of issues related to high thermal output, complex synchronization protocols, and compute-intensive stitching. Other devices for omnidirectional odometry use hyperspectral and laser imaging components such as infrared, LiDAR, and ToF cameras. These cameras, while more accurate in sensing the environment, suffer from sparsity of captured information and limited temporal resolution due to moving parts. A single-sensor omnidirectional depth camera can alleviate all these problems comprehensively.

An ideal omnidirectional camera, capable of imaging the full spherical field of view around the camera, can significantly improve machine vision capabilities. It can bring multifold advantages in terms of cost, compute, and power requirements over other competing sensing technologies. An ideal solution in this context should: 1) capture rich (RGB+D), dense, omnidirectional information in real time without blind spots, leading to semantic scene understanding and 3D scene structure recovery; 2) use a minimal number of physical sensors and avoid moving parts, to reduce errors due to misalignment, synchronicity, and thermal caps; and 3) have appropriate form factor, weight, and power requirements to enable easy integration into typical robots and drones of varying size and shape. The sensor should also be computationally self-sufficient. These requirements pose severe challenges to the design of omnidirectional cameras.

Such a camera can significantly alter the landscape of the automation industry, especially robotics. Unlike autonomous vehicles that use GPS, indoor robots rely solely on sensors to navigate their environments. To date, unmanned rovers moving cartons around the warehouse have been weighed down with expensive sensors to capture their spatial understanding within the facility and avoid collisions with humans and other equipment. From full 360° localization and mapping to uniform situational awareness and dynamic response, these cameras can enable uninhibited interaction and interoperability between humans and robots.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

11:30 am - Noon ET

The Biggest Challenges of Bin Picking

Jan Zizka, Photoneo

3D vision-guided automation is still a new industrial field, which means that we constantly face newly emerging challenges. This presentation will cover the biggest challenges of bin picking as we know it today by trying to answer the questions that shape the direction of the latest advances in the field. Has the development of 2D and 3D technology reached its peak or, on the contrary, is there still room for major advancements? How much fine-tuning do 3D machine vision systems need to undergo to make robot performance 100% reliable? This requires a perfect interplay between high scan accuracy, resolution, and scanning speed. To what extent are the developers of 3D machine vision solutions able to meet these requirements, and what does the current market have to offer?

From a general perspective, all 3D cameras and 3D scanners available on the market are based on technologies that can be divided into three main categories: time-of-flight, stereo vision, and technologies based on emitting a structured light pattern. How do they differ and which one is most effective? Though we may lay down multiple criteria to differentiate between various 3D sensing techniques, the most meaningful one for next-generation automation seems to be the ability to scan moving objects in sufficient quality. Here a new, fourth method comes into play - the revolutionary Parallel Structured Light technology patented by Photoneo, which is based on a specially designed CMOS image sensor with a mosaic shutter. This method allows Photoneo’s 3D camera MotionCam-3D to scan objects in rapid motion. In what way is this method a breakthrough, and how does it change the traditional concept of what is possible in 3D imaging?

The growing importance of and advances in high-quality image acquisition of 3D scenes is inextricably linked to the rise of industrial automation and robotisation. Thanks to the advancements in robot vision, automation of manufacturing processes has entered a completely new dimension. Yet the complexity of applications that need to be automated poses increasingly difficult challenges to machine vision. The most common bin picking use cases include applications in industrial production, palletisation, de-palletisation, and robotic manipulation. Gradually, 3D machine vision is crossing the borders of industrial production and entering spheres such as the food industry.

The field of AI and Machine Learning is also moving forward in giant leaps and, in combination with 3D machine vision systems, finds an increasingly wide array of industrial applications. What methods are there for CAD-based matching on the one hand and picking of unknown items on the other, where are they currently used, and where might they be used in the future? Another important feature in the context of industrial automation is path planning, as it is necessary for autonomous robot performance. Where does its development currently stand and how many companies rely on this robotic “ability”? And finally, innovations in the field of industrial automation in general and bin picking solutions in particular include new approaches to grasping methods as well as efforts to shorten the cycle times of object picking and placing. Which advancements in this area can we already enjoy and what remains subject to improvement? Answers to these questions come in the form of an insightful overview of current trends in the development of 3D machine vision technologies and solutions applied in bin picking applications.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

Noon - 1:30 pm ET

BREAK

1:30 pm - 2:00 pm ET

Building Advanced Robotics Applications Quickly: Vision Sensor Integration with Robot Operating System

Katherine Scott, Open Robotics

Robot Operating System (ROS) is a collection of free and open-source software packages used by a large and growing developer community to build, simulate, and test robotic systems. This community includes a number of Fortune 500 companies, autonomous car and truck companies, government entities, and universities. According to ABI Research, 55% of robots shipped in 2024 will include at least one ROS package [1]. Just as Linux overtook proprietary vendors in the cloud computing market, ROS is poised to supplant closed-source systems in the development of advanced robotic applications. Despite this, vision sensor support for ROS remains ad hoc; only a handful of vendors support official ROS packages. This lack of support not only slows research and development, it also makes application development more difficult for end users and exposes them to higher risks in their deployments.

In this talk we will cover the basics of ROS: what it is, how it works, and what it is used for. Specifically, we will show how ROS can be used to quickly simulate an imaging sensor and benchmark its performance in an application. We will then show how the simulation environment can be easily ported to actual hardware, calibrated, and integrated into a more complex application.

Following from our toy example, the talk will cover existing ROS 1 and ROS 2 vision capabilities and the packages currently maintained by the community. This portion of the talk will discuss how vendors or users who wish to contribute open sensor drivers can properly configure their source code repositories for the best out-of-the-box experience and rapid adoption into the ROS community. Moreover, we will discuss how a good vision sensor package makes it possible to rapidly develop complex computer-vision-controlled robotics applications.

[1] https://www.bloomberg.com/press-releases/2019-05-16/the-rise-of-ros-nearly-55-of-total-commercial-robots-shipped-in-2024-will-have-at-least-one-robot-operating-system-package
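As a flavor of how little glue code a well-supported sensor needs inside ROS, here is a minimal ROS 2 (rclpy) node that subscribes to a camera driver's image topic and hands each frame to OpenCV. It is a generic sketch, not one of the packages covered in the talk, and the topic name is an assumption (real drivers vary):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class CameraViewer(Node):
    """Minimal sketch: consume images published by any ROS 2 camera driver."""
    def __init__(self):
        super().__init__('camera_viewer')
        self.bridge = CvBridge()
        # '/camera/image_raw' is an assumed topic name.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('camera', frame)
        cv2.waitKey(1)

def main():
    rclpy.init()
    rclpy.spin(CameraViewer())
    rclpy.shutdown()

if __name__ == '__main__':
    main()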

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

2:15 pm - 2:45 pm ET

High-Speed Bin Picking via Commercial Cameras and AI

Paul Thomas, PE, Procter & Gamble

This presentation will take you on a journey, demonstrating highly accurate and high-speed pick and place of consumer goods. Through the use of commercially available hardware and in-house deep learning software, travel through concept, data collection, training, and execution. See some secrets from under the hood that enabled this idea to become reality.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:00 pm - 3:30 pm ET

The Future of Robot Safety: From Collaborative Robots to Collaborative Applications through Advanced Vision

Clara Vu, Veo Robotics

When people imagine what factories will look like in the future, many of us picture a “lights out factory” with machines humming all day and night with no people in sight. But transforming factories, particularly to make them more flexible, will mean physically bringing together humans and robots. The most flexible machine in a factory is a robot and the most flexible resource is a human. Industrial robots are powerful, precise, and repeatable, but they don’t have the flexibility, intelligence, dexterity, and judgment of humans, and they won’t any time in the foreseeable future. The best way to make manufacturing flexible is to let robots and people work closely together, each doing what they do best.

Popular perception of industrial collaborative robot systems centers on the robot itself, which is often a particular type of robot called Power and Force Limited (PFL). However, PFL is only one means of achieving safe collaboration, and it only addresses a subset of the risks involved in collaborative applications. Another approach that is growing in popularity is Speed and Separation Monitoring (SSM), which addresses some of PFL robots’ shortcomings.

Collaborative applications using SSM have fewer limitations on end effector design, robot speed, and payload. However, their implementations increase the complexity of the overall system because they require the integration of advanced 3D vision sensing systems and the computation of protective separation distances. Future intelligent vision sensing systems must reduce the burden of calculations on the integrator, providing a holistic approach to workcell safety.
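For readers new to Speed and Separation Monitoring, the protective separation distance follows the general form given in ISO/TS 15066: the distance the human can travel while the robot reacts and stops, plus the robot's own motion, stopping distance, and uncertainty allowances. The sketch below is a simplified illustration with assumed numbers, not Veo Robotics' calculation:

# Simplified protective separation distance (general form after ISO/TS 15066):
#   S = v_h*(t_react + t_stop) + v_r*t_react + s_stop + C + Z
def protective_separation_m(v_human=1.6,     # m/s, assumed human approach speed
                            v_robot=1.0,     # m/s, robot speed toward the human
                            t_react=0.1,     # s, sensing + control reaction time
                            t_stop=0.4,      # s, robot stopping time
                            s_stop=0.3,      # m, robot stopping distance
                            intrusion=0.2,   # m, intrusion distance constant C
                            uncertainty=0.1  # m, measurement uncertainty Z
                            ) -> float:
    return (v_human * (t_react + t_stop) + v_robot * t_react
            + s_stop + intrusion + uncertainty)

print(f"{protective_separation_m():.2f} m")   # ~1.5 m with these assumed values

All of the assumed terms above have to be measured or bounded in a real workcell, which is exactly the computational burden on the integrator that the paragraph above refers to.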

This talk will cover the possibilities of flexible manufacturing that human-robot interaction can enable and the technical and robotic vision challenges that it raises, and what it means to create a safe collaborative workcell. We will discuss how Veo Robotics is addressing these challenges using advanced 3D safety-rated Time-of-Flight vision technology for Speed and Separation Monitoring. The talk will conclude with an examination of the impact that human-robot collaboration will have on manufacturing from flexible factory to continuously adaptive factory.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.

3:45 pm - 4:15 pm ET

Random SKU Depalletizing Using Vision and AI

Bryan Knott, ABB Robotics

For this presentation we will describe the problem and the offered solution: using a proprietary camera and AI system to allow the robot system to unload pallets without needing to be taught the box sizes or pallet patterns.

Register to View Recording

Please allow up to 30 minutes after the end of the webinar for the recording to become available.