AIA Vision Week

Join us at AIA Vision Week for machine vision education!

May 18-22, 2020

REGISTER NOW

ABOUT THE VIRTUAL CONFERENCE

In mid-May, AIA hosted a full week of virtual educational conference sessions, enlightening keynote speakers, and connections to the industry’s top suppliers showcasing the latest vision and imaging technologies.

These sessions will remain available online until July 31, 2020 for you to watch for FREE. Anyone working with vision and imaging technologies – or anyone who would like to – is encouraged to register to watch the pre-recorded sessions.

You’ll get access to educational sessions taught by leading vision experts, where you’ll learn how vision can help you increase profitability, improve throughput, reduce defects, comply with regulations, solve your automation problems and more!

Also available until July 31, 2020 is the Vision Products Showcase, where you can see the latest in vision and imaging technologies and connect with more than 100 leading companies. You can learn about their technology innovations and how they can help your company successfully deploy vision to increase your quality, efficiency and global competitiveness. 

When you register, you’ll receive an email confirmation (from “Vision Week Team” events@a3automate.org) with the link to sign up for the individual conference sessions you’d like to view (live or recorded), plus another link to access the Vision Products Showcase. Be sure to save this email and/or bookmark the pages so you can easily access the program throughout the week. 

Note that each conference session you sign up for will email you a GoToWebinar link for that session (the email is from “Clarissa Carvalho” customercare@gotowebinar.com).

Already Registered?  Sessions are available to watch for FREE until July 31, 2020. Please refer to the links sent in your confirmation email from “Vision Week Team” events@a3automate.org. There are two links – one to sign up for Conference Sessions and one to visit the Vision Products Showcase.

If you’ve already signed up for conference sessions and are ready to attend live or recorded, use the link(s) for each session emailed from “Clarissa Carvalho” customercare@gotowebinar.com.

Questions or issues? Call us at 734-929-3260 (M-F, 8:30am-4:30pm EDT) or email us.

 

Vision Week 2020 - Join us for a Virtual Event

REGISTER NOW


VISION PRODUCTS SHOWCASE

Are you looking for specific vision components or just want to see the exciting new innovations in vision and imaging? Visit the Vision Products Showcase all week to connect with 100+ suppliers of vision components and systems.

You’ll learn about the companies and their technologies, and view products and systems that can help you with your unique challenges.

Create your own “Showcase Planner” so you can directly connect with companies or products in which you are interested. AIA Vision Week can help you improve your business now or prepare you – as the COVID-19 crisis subsides – to ramp up to take advantage of backlogged orders and increased automation demands.

Register now to access the Vision Products Showcase.

Questions or issues? Email us or call us at 734-929-3260 (M-F, 8:30am-4:30pm EDT).

 

MONDAY, MAY 18, 2020
10:00 am - 10:30 am ET

Laser Triangulation 101

Mattias Johannesson, SICK
  • Getting Started with Vision

Mattias Johannesson

Senior Expert 3D Vision, SICK

Laser Triangulation 101

Laser triangulation is a well-proven 3D imaging technology. This session discusses how to get the most out of a laser triangulation measurement system, as well as the limitations of the technology. We will go through optical considerations, lasers, sensors, and systems, and compare laser triangulation with other key 3D technologies.
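
To make the triangulation principle concrete, here is a minimal sketch (an illustration of the general technique, not material from the session; the function name, geometry, and numbers are hypothetical) of how the laser line’s displacement on the sensor maps to object height:

import numpy as np

def height_from_laser_line(row_px, row_ref_px, px_size_mm, magnification, cam_angle_deg):
    """Convert the laser line's shift on the sensor into object height.

    Assumes the laser sheet is perpendicular to the transport surface and
    the camera views it at cam_angle_deg -- the classic triangulation layout.
    """
    # Line shift on the sensor, converted to millimeters in object space
    shift_mm = (row_px - row_ref_px) * px_size_mm / magnification
    # Triangulation: height = lateral shift / tan(viewing angle)
    return shift_mm / np.tan(np.radians(cam_angle_deg))

# Example: line moved 120 px on a 5.5 um pixel sensor, 0.2x magnification, 30 deg camera
print(height_from_laser_line(620, 500, 0.0055, 0.2, 30.0))  # ~5.7 mm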

10:45 am - 11:15 am ET

Artificial Intelligence - Who Will be Affected and How Will Things Change in Your Organisation

Andrew Long, Cyth Systems
  • Getting Started with Vision

Andrew Long

CEO, Cyth Systems

Artificial Intelligence - Who Will be Affected and How Will Things Change in Your Organisation

Learn how AI can benefit a larger portion of the population through a planned approach to deploying AI within your organization.

11:30 am - 12:30 pm ET

KEYNOTE: Innovative Machine Vision Applications at Procter & Gamble

Mark Lewandowski, Procter & Gamble

  • Getting Started with Vision

Mark Lewandowski

Robotics Innovation Technical Section Head, Procter & Gamble

KEYNOTE: Innovative Machine Vision Applications at Procter & Gamble

Mark Lewandowski, the Robotics Innovation Technical Section Head at Procter & Gamble, will discuss how the global manufacturing giant is leveraging machine vision and machine learning technologies in its latest application – and what advances P&G sees on the horizon.

12:30 pm - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Choosing the Right Machine Vision Lens

Nick Sischka, Edmund Optics
  • Getting Started with Vision

Nick Sischka

Manager of Sales Operations, Imaging, Edmund Optics

Choosing the Right Machine Vision Lens

As sensors continue to evolve, the landscape of lenses matched to these sensors continues to grow along with them. With many more lenses available on the market, it can be tricky to know which lens to choose for which application. This talk will discuss important lens parameters and go into detail on what these parameters actually translate to in the real world, in order to aid in the selection of lenses.
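
As a rough back-of-the-envelope check when shortlisting lenses, the thin-lens relation below estimates the required focal length from the field of view, sensor size, and working distance (a generic sketch, not material from the talk; the example numbers are hypothetical):

def focal_length_mm(working_distance_mm, fov_mm, sensor_mm):
    """Thin-lens estimate of the focal length needed for a given field of view.

    working_distance_mm: lens-to-object distance
    fov_mm: required field of view (same axis as sensor_mm)
    sensor_mm: sensor dimension along that axis
    """
    m = sensor_mm / fov_mm                      # primary magnification
    return working_distance_mm * m / (1.0 + m)  # f = WD * m / (1 + m)

# Example: 100 mm field of view at 300 mm working distance on a 2/3" sensor (8.8 mm wide)
print(round(focal_length_mm(300, 100, 8.8), 1))  # ~24.3 -> shortlist 25 mm lenses

In practice you would round to the nearest stock focal length and then verify resolution and distortion against the sensor, which is exactly the kind of trade-off the session covers.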

2:15 pm - 2:45 pm ET

Using 3D Technology

Sean Pologruto, Basler
  • Getting Started with Vision

Sean Pologruto

Applications Engineer, Basler

Using 3D Technology

Learn about 3D techniques for the industrial space and how different concepts apply to specific scenarios. Current 3D technologies include:

  • Structured light
  • ToF
  • LIDAR
  • Stereo

The session will also elaborate on the challenges that come with each platform and where each might perform best.

3:00 pm - 3:30 pm ET

SWIR Imaging for Machine Vision

Martin H. Ettenberg, Princeton Infrared Technologies
  • Getting Started with Vision

Martin H. Ettenberg

President & CEO, Princeton Infrared Technologies

SWIR Imaging for Machine Vision

This session will explore non-visible machine vision applications with uncooled and cooled short-wave infrared (SWIR) imagers (750 nm to 2500 nm). This will include not only broadband imaging applications but also machine vision applications that use spectroscopic signatures for quality control and discrimination. Various uses of the technology will be introduced, and current SWIR detection equipment will be presented along with the tradeoffs between those technologies in various applications.

3:45 pm - 4:15 pm ET

Filters: The Key to Image Quality in Modern Vision Applications

Georgy Das, Midwest Optical Systems
  • Getting Started with Vision

Georgy Das

Technical Training Manager, Midwest Optical Systems

Filters: The Key to Image Quality in Modern Vision Applications

Your goal is simple: acquire a clear, high-resolution, glare-free image — all while keeping costs down and ensuring that the process is repeatable. Does that sound difficult? Or even impossible? It doesn’t have to be. Optical filters are a simple, cost-effective way to enhance repeatability and to achieve the highest level of performance from your machine vision system. Learn more about how optical filters can be used to solve even the toughest issues in machine vision applications and how filters can help you get ahead of the curve when it comes to next-generation applications.

TUESDAY, MAY 19, 2020
10:00 am - 10:30 am ET

Infrared Imaging for Non-Visible Quality Assurance

Jake Sigmond, FLIR Systems
  • Advances in Machine Vision Integration & Applications

Jake Sigmond

Application Sales Engineer, FLIR Systems

Infrared Imaging for Non-Visible Quality Assurance

The trend of decreasing cost in infrared detector technology and the increased value of identifying faulty products before shipment have created innovative new solutions. Temperature differences of 1.8 degrees Fahrenheit or more are used to determine pass/fail criteria for packaging, sealing, plastic or metal welding, plastic molding, flaw detection, food production, die casting, and a variety of other applications. Infrared imaging repeatably and accurately illustrates the thermal patterns and gradients used to identify flaws in production processes by indicating an incomplete shape, a non-uniform temperature profile, or varying gradients. Thermal data analytics assist in finding and correcting errors in the production of equipment and prevent unwanted product from making it to the customer.

Many of the same concepts in vision systems, such as algorithms for edge detection, blob analysis, and pattern recognition, apply to infrared solutions. Industry-standard communication protocols such as GenICam, GigE Vision, RTSP, and ONVIF Profile S allow for integration of thermal cameras with commonly used software and hardware. The parameters used in infrared camera systems, emissivity and reflectivity, streamline configuration in comparison to the complexities of lighting common in modern vision solutions. This technology conversation covers an introduction to infrared technology and current applications of infrared camera systems in quality assurance processes, and touches on market trends and the future of infrared in industry.
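
The pass/fail logic itself can be very simple once radiometric data is available. A toy sketch (our illustration, not FLIR’s implementation; region names and temperatures are made up) using the 1.8 °F (~1.0 °C) difference cited above:

import numpy as np

DELTA_T_FAIL_C = 1.0  # 1.8 deg F is roughly 1.0 deg C

def seal_passes(inspect_roi_c, reference_roi_c):
    """Toy pass/fail check on radiometric pixels (degrees Celsius).

    Compares the mean temperature of the inspected region (e.g., a heat
    seal) against a known-good reference region from the same frame.
    """
    delta = abs(np.mean(inspect_roi_c) - np.mean(reference_roi_c))
    return delta < DELTA_T_FAIL_C

# Synthetic example: an incomplete seal runs about 2 C cooler
good = np.full((20, 20), 85.0)
suspect = np.full((20, 20), 83.0)
print(seal_passes(suspect, good))  # False -> reject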

10:45 am - 11:15 am ET

How Good is Your 3D Machine Vision Solution for Manufacturing?

Dr. Kamel Saidi, NIST
  • Advances in Machine Vision Integration & Applications

Dr. Kamel Saidi

Group Leader, National Institute of Standards and Technology

How Good is Your 3D Machine Vision Solution for Manufacturing?

3D machine vision (or 3D perception) can be a very effective solution for applications in manufacturing that require a robot to know the precise position and orientation (pose) of a part that it needs to pick up from a bin or a conveyor. Applications such as robotic assembly at small and medium manufacturers could benefit greatly from 3D perception. Being able to measure an accurate pose of a part can reduce the reliance on jigs and fixtures (that often help with part positioning) and can open up new possibilities for mobile robot arms. The problem with 3D perception systems to date is that experience with these systems has lagged significantly behind expectations. In other words, 3D perception works great in theory, but it falls short in practice. Many users of 3D perception systems can attest firsthand to the frailties of these systems and these users often turn toward more traditional 2D machine vision solutions instead.
 

The above problem is common to many new or advanced technologies and it is often amplified by a lack of consensus standards for these technologies. In the absence of standards, producers of 3D perception systems can use terms and metrics of their choice to specify the performance of their products. At the same time, it is close to impossible for users of these systems to either verify the claims made in the producers’ specifications or to compare one producer’s 3D perception system with another’s. Machine vision systems that use communications standards such as GigE Vision® and Camera Link® give users of these systems assurances that the systems will communicate in a well-defined manner with other systems. However, equivalent performance standards that can provide assurances that these systems will provide some level of measurement quality (i.e., performance) are few and far between. There is little industry agreement on the meanings of even the most basic metrics (such as “resolution,” “depth error,” or “frame rate”) for 3D perception systems.
 

ASTM International Committee E57 on 3D Imaging Systems and the National Institute of Standards and Technology (NIST) Intelligent Systems Division, in collaboration with leading producers of 3D perception systems, researchers, and academics, have developed a roadmap of performance standards needed for these systems. As part of this work, NIST also undertook an extensive market survey of commercially available and emerging 3D perception systems in order to understand the 3D machine vision industry landscape. This session will present key findings from the market survey as well as results of the standards roadmapping effort. This session will also present recent results from subsequent efforts undertaken by NIST and industry partners to develop the high-priority standards identified in the roadmap.
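
The definitional ambiguity is easy to see with “depth error.” One of several plausible definitions is the RMS residual of a plane fit to a scan of a known-flat target; a sketch of that single choice (our illustration, not a NIST or ASTM E57 definition):

import numpy as np

def plane_fit_depth_error(points_xyz):
    """RMS residual of a least-squares plane fit to a scan of a flat target.

    Only one of several plausible 'depth error' definitions -- others report
    peak-to-valley or per-pixel standard deviation, which is the problem.
    """
    xy1 = np.c_[points_xyz[:, :2], np.ones(len(points_xyz))]
    # Solve z = a*x + b*y + c in the least-squares sense
    coeffs, *_ = np.linalg.lstsq(xy1, points_xyz[:, 2], rcond=None)
    residuals = points_xyz[:, 2] - xy1 @ coeffs
    return np.sqrt(np.mean(residuals ** 2))

# Synthetic flat wall at z = 500 mm with 0.3 mm of sensor noise
pts = np.random.rand(10000, 3) * [100.0, 100.0, 0.0]
pts[:, 2] = 500.0 + np.random.normal(0.0, 0.3, len(pts))
print(round(plane_fit_depth_error(pts), 2))  # ~0.3 (mm)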

11:30 am - 12:30 pm ET

KEYNOTE: Decoding Mixed Reality for Enterprise

Rajat Gupta, Microsoft Corp.

  • Advances in Machine Vision Integration & Applications

Rajat Gupta

Director, Business Development - AI and MR, Microsoft Corp.

Decoding Mixed Reality for Enterprise

The goal of this presentation is to inform the audience of the applicability and current state of Mixed Reality technologies in the enterprise. As companies make digital transformation decisions in response to cyclic industrial evolution and macro-economic shocks such as COVID-19, it is imperative to balance immediate deployment of new technologies with planning for the future.

12:30 pm - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Evolution of Computational Imaging in Machine Vision

Marc M. Landman, CCS America, Inc.
  • Advances in Machine Vision Integration & Applications

Marc M. Landman

Senior Technical Advisor, CCS America, Inc.

Evolution of Computational Imaging in Machine Vision

Computational imaging is used in the machine vision industry to improve image quality, extract features, and output images which are not possible with conventional imaging.

 

In this session, we will look at how computational imaging has evolved and how it can be deployed to efficiently build more reliable vision systems. We’ll cover several techniques which can be applied to different applications and the benefits they bring to users and system builders. Using practical examples, we will review how computational imaging sequences can be performed and what components are involved in that process.

 

Join us to learn what kinds of inspections become possible with computational imaging.
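
One classic computational-imaging sequence is photometric stereo: several exposures under different known light directions, combined to recover per-pixel surface orientation that no single image contains. A minimal sketch of the idea (our illustration; the session may cover different techniques):

import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from three or more images taken
    under different known directional lights (Lambertian assumption).

    images: (k, h, w) float array, one image per light
    light_dirs: (k, 3) unit vectors pointing toward each light
    """
    k, h, w = images.shape
    stacked = images.reshape(k, -1)  # one column of intensities per pixel
    # Solve light_dirs @ g = intensities for g = albedo * normal, all pixels at once
    g, *_ = np.linalg.lstsq(light_dirs, stacked, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

The hardware side of such a sequence is a lighting controller that fires a different light for each exposure and a camera triggered in sync, which is where the components discussion above comes in.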

2:15 pm - 2:45 pm ET

Ultra High Resolution Sensors - What about the Optics?

Andreas Platz, Sill Optics
  • Advances in Machine Vision Integration & Applications

Andreas Platz

Product Manager Machine Vision, Sill Optics GmbH

Ultra High Resolution Sensors - What about the Optics?

Camera manufacturers are offering larger sensor sizes with smaller pixels to achieve higher resolutions of 50–100 megapixels and beyond. This results in new demands for optics that can accommodate these larger sensor sizes and smaller pixels. This presentation will provide a detailed overview of the new optical requirements for standard, telecentric, and bi-telecentric lenses, including trade-offs in resolution, field of view, magnification, aperture, wavelength range, and color correction. Topics will include optical performance goals and practical issues including working distance, mechanical constraints, coaxial illumination, and costs. Furthermore, the presentation will address why standard lenses can reach their limits and under which conditions an OEM design should be considered.
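
Two numbers frame the optical challenge for any such sensor: the Nyquist frequency its pixels impose on the lens and the image circle the lens must cover. A quick sketch (our illustration; the example geometry is one plausible 100 MP-class format):

def sensor_limits(pixel_um, h_px, v_px):
    """Nyquist-limited resolution and image-circle diameter a lens must support."""
    nyquist_lp_mm = 1000.0 / (2.0 * pixel_um)  # line pairs per mm at the sensor
    diag_mm = (pixel_um / 1000.0) * (h_px ** 2 + v_px ** 2) ** 0.5
    return nyquist_lp_mm, diag_mm

# Example: 3.76 um pixels at 11648 x 8742 (a 100 MP-class sensor)
lp, diag = sensor_limits(3.76, 11648, 8742)
print(f"lens must resolve ~{lp:.0f} lp/mm over a ~{diag:.0f} mm image circle")
# ~133 lp/mm over ~55 mm: the combination that pushes standard lenses to their limits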

3:00 pm - 3:30 pm ET

A Simpler Multispectral Alternative to Applying Hyperspectral Imaging in Machine Vision

Steve Kinney, Smart Vision Lights
  • Advances in Machine Vision Integration & Applications

Steve Kinney

Director of Engineering, Smart Vision Lights

A Simpler Multispectral Alternative to Applying Hyperspectral Imaging in Machine Vision

With the ability to detect wavelengths between 1,000 and 2,500 nm, shortwave infrared (SWIR) imagers are significantly extending image capture and analysis possibilities — especially when used in conjunction with visible sensors. Vision systems able to fuse data from both spectra hold interesting new possibilities that were previously possible only through the use of exotic hyperspectral equipment and analysis.

 

One of the biggest impediments to such vision systems has been the challenge of distilling complex hyperspectral data down to a reduced subset of information that would enable a more targeted multispectral approach. There is a promising solution, however: by combining arrays of narrowband LEDs spanning the visible to SWIR ranges with selective bandpass filtering techniques, it is possible to increase contrast around key wavelengths for a specific application.

 

This presentation will illustrate an approach to such multispectral illumination schemes, and outline how they can help reduce the cost, time, and complexity of implementing tasks previously possible only through hyperspectral technology.
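
At its simplest, the multispectral “reduced subset” is just two well-chosen bands turned into one feature image. A minimal sketch (our illustration, not the presenter’s method; the band choices are hypothetical): capture one frame per narrowband LED, then compute a normalized difference and threshold it.

import numpy as np

def band_contrast(img_band_a, img_band_b):
    """Normalized difference of two narrowband exposures, in [-1, 1].

    Each input is one frame captured under a single narrowband LED, e.g.,
    a visible band versus a SWIR band where the feature absorbs strongly.
    """
    a = img_band_a.astype(np.float64)
    b = img_band_b.astype(np.float64)
    return (a - b) / np.maximum(a + b, 1e-8)

# defect_mask = band_contrast(frame_650nm, frame_1450nm) > 0.2  # hypothetical bands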

3:45 pm - 4:15 pm ET

New Sensors and Optical Considerations

Gregory Hollows, Edmund Optics
  • Advances in Machine Vision Integration & Applications

Gregory Hollows

Vice President, Edmund Optics

New Sensors and Optical Considerations

Session description coming soon.

WEDNESDAY, MAY 20, 2020
9:45 am - 10:30 am ET

KEYNOTE: Vision Standards Update – Vision Interface Standards

Bob McCurrach, Association for Advancing Automation
  • Vision & Medical Applications

Bob McCurrach

AIA Director of Standards Development, Association for Advancing Automation

Vision Standards Update – Vision Interface Standards

Join Bob McCurrach, AIA Director of Standards Development, and the chairs of the vision hardware interface standards to learn about the latest updates. Standards include: GenICam, GigE Vision, USB3 Vision, Camera Link, Camera Link HS and CoaXPress.

 

Panelists include:

  • Friedrich Dierks, GenICam Chair and Director Research and Development, Basler
  • Eric Bourbonnais, GigE Vision Chair and Software Design Leader, Teledyne Imaging
  • Eric Gross, USB3 Vision Chair and Senior Engineer, National Instruments
  • Mike Miethig, Camera Link HS Chair and Technical Manager, Teledyne Imaging
  • Chris Beynon, CoaXPress Chair and CTO, Active Silicon
10:45 am - 11:15 am ET

Using Thermal Cameras for Elevated Body Temperature Screening

Markus Tarin, MoviTHERM
  • Vision & Medical Applications

Markus Tarin

President & CEO, MoviTHERM

Using Thermal Cameras for Elevated Body Temperature Screening

Thermal cameras are being deployed in record time for screening of elevated body temperature at airports and other public places. However, as useful and capable as this technology is, there is a lot of misinformation circulating in the news. As with the implementation of any technology, there are challenges. It is important to understand the physics that affect the accuracy of an optical temperature measurement; moreover, there are many other factors to consider related to bio-physical phenomena. This talk explains how thermal imaging technology can be used to screen for people with an elevated body temperature.
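
A piece of that physics in miniature: the camera sees emitted plus reflected radiation, so the reported temperature must be corrected for emissivity. A broadband Stefan–Boltzmann sketch (our simplification; radiometric cameras integrate over their spectral band, and the numbers here are illustrative):

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def object_temp_k(apparent_temp_k, emissivity, reflected_temp_k):
    """Grey-body correction: W_measured = e*s*T_obj^4 + (1-e)*s*T_refl^4."""
    w_measured = SIGMA * apparent_temp_k ** 4
    w_emitted = w_measured - (1.0 - emissivity) * SIGMA * reflected_temp_k ** 4
    return (w_emitted / (emissivity * SIGMA)) ** 0.25

# Skin emissivity is high (~0.98), so the correction is small -- one reason
# faces are a workable target for screening
print(round(object_temp_k(309.0, 0.98, 295.0), 2))  # ~309.27 K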

11:30 am - 12:30 pm ET

KEYNOTE: The State of the Vision Industry Executive Roundtable

  • Vision & Medical Applications
 

KEYNOTE: The State of the Vision Industry Executive Roundtable

Join us as we interview leading executives on the current state of the machine vision industry and the opportunities and challenges of the COVID-19 pandemic. AIA Vice President Alex Shikany will moderate this insightful roundtable. Attendees will have the opportunity to ask these executives questions during the live discussions.

 

Panelists include:

12:30 pm - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Advances in Unit-level Traceability in Life Science Industries

John Agapakis, Omron Automation Americas
  • Vision & Medical Applications

John Agapakis

Director Business Development, Traceability Solutions, Omron Automation Americas

Advances in Unit-level Traceability in Life Science Industries

The breadth and scope of traceability has expanded significantly over the years along with advances in technology, making it a ubiquitous and critical application in today’s world-class manufacturing operations and especially important in life science industries such as medical device manufacturing, pharmaceutical packaging, and clinical diagnostics instrumentation and lab automation. In addition to the economic justification of implementing unit-level traceability and the liability implications of mislabeling or recalling products, regulatory agencies and standard-setting organizations have also introduced regulations requiring unit-level traceability, such as FDA’s UDI (Unique Device Identification) and DSCSA (Drug Supply Chain Security Act) serialization mandates.

 

In this presentation, we will discuss both the internal justification and benefits of a traceability implementation as well as externally imposed mandates and regulations. We will also explore the evolution of traceability, and explain why the latest phase, which we refer to as “Traceability 4.0”, is not just about tracking products and components but also about optimizing productivity and quality by tying product to process parameters.

 

Regarding some of the specific automatic identification technologies used in traceability applications, we will review the applicability and relative advantages of imaging technologies such as 1D bar codes printed on labels and 2D codes directly marked on parts (DPM) as well as the complementary, non-line-of-sight technology of RFID. We will also discuss the MVRC (Mark-Verify-Read-Communicate) end-to-end traceability deployment methodology that helps ensure robustness in a traceability implementation, and will specifically highlight new integrated machine vision solutions for in-line inspection and print/mark quality verification for every label or part as it is being printed or marked.
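
As a taste of the “Read” step in that MVRC chain, here is a minimal sketch using the open-source pyzbar reader for label barcodes (our illustration only, not Omron’s tooling; the file name, symbology, and payload are hypothetical, and directly marked Data Matrix codes would typically need a dedicated DPM reader):

from PIL import Image
from pyzbar.pyzbar import decode

def read_step(image_path, expected_symbology='CODE128'):
    """Decode label codes and keep only the expected symbology, so the
    payload can be verified before it is communicated upstream."""
    results = decode(Image.open(image_path))
    return [(r.type, r.data.decode()) for r in results
            if r.type == expected_symbology]

print(read_step('label.png'))  # e.g., [('CODE128', 'LOT2020A-0042')]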

2:15 pm - 2:45 pm ET

Diagnostic Imaging and Functional Technology to Detect Acute and Chronic Marijuana Impairment

Dr. Denise Valenti, IMMAD
  • Vision & Medical Applications

Dr. Denise A. Valenti

CEO/President, IMMAD, LLC

Diagnostic Imaging and Functional Technology to Detect Acute and Chronic Marijuana Impairment

The visual system was studied in one of the largest collections of quality research on human performance under marijuana, work done by optometrists at the University of California, Berkeley in the early 1970s. This research and more contemporary work using electrodiagnostic techniques and complex visual fields will be discussed, as well as future approaches with imaging.

3:00 pm - 3:30 pm ET

How to Build FDA Approved Medical Imaging Applications

Darcy Bachert, Prolucid Technologies Inc.
  • Vision & Medical Applications

Darcy Bachert

CEO, Prolucid Technologies Inc.

How to Build FDA Approved Medical Imaging Applications

Most companies looking to build medical imaging applications will need to go through the process of FDA approval. In this presentation we look at some of the key differentiating factors that make medical-grade applications unique, approaches you can take to streamline regulatory approvals, and ways to ultimately simplify and accelerate your path to market.

3:45 pm - 4:15 pm ET

Imaging Solutions to Provide Zero Fault Inspections in the Biomedical and Pharmaceutical Industry

Luca Bonato, Opto Engineering
  • Vision & Medical Applications

Luca Bonato

Product Manager, Opto Engineering

Imaging Solutions to Provide Zero Fault Inspections in the Biomedical and Pharmaceutical Industry

Nowadays, biomedical and pharmaceutical industries must fully comply with increasingly demanding standards, especially concerning zero-fault policies. The recent COVID-19 wave has pushed even harder for increased speed and quality assurance.

 

For such sectors, machine vision plays an essential role in implementing 100% product inspection. For example, in the manufacturing process of vials it is essential to avoid contamination: this is achieved by checking the crimping of the aluminum seal, cracks and scratches on the glass surface, missing flip-off caps, vial integrity, etc.

 

Reliability of the inspection system is therefore critical: this is why a smart solution can be implemented by wisely choosing the optical components for the system, saving both space on the manufacturing line and processing time in software algorithms.

THURSDAY, MAY 21, 2020
10:00 am - 10:30 am ET

Real World Challenges for AI in Vision Applications

Dany Longval, Teledyne Imaging
  • AI & Machine Learning in Vision

Dany Longval

Vice President of Sales, Teledyne Imaging

Real World Challenges for AI in Vision Applications

Dany Longval, Vice President of Sales for Teledyne Lumenera, will address how advancements in artificial intelligence, particularly in deep learning, have accelerated the proliferation of vision-based applications.

10:45 am - 11:15 am ET

Deep Learning for Quality Inspection: 2 Case Studies

Stephen Welch, Mariner USA
  • AI & Machine Learning in Vision

Stephen Welch

VP of Data Science, Mariner USA

Deep Learning for Quality Inspection: 2 Case Studies

Despite making significant investments in machine vision systems to detect and classify defects, manufacturers experience high false positive rates. While machine vision systems may include high-quality optics, effective lighting, and high-resolution image capture, their defect detection methods are incapable of the accuracy of modern deep learning technology. As a result, human effort is required to compensate for the system’s inaccuracy. High false positive rates also prohibit further robotic automation. In this presentation, Stephen will compare traditional machine vision technology with deep learning. He will share two case studies from the automotive industry where a real-time deep learning vision system was used to improve existing machine vision systems’ accuracy. He will show how he improved visual inspection accuracy by 20x – 30x for these companies by using the cloud and a ResNet-based architecture. Stephen will also share how the effort fits into the broader context of deep learning, address the specific complexities of building, deploying, and maintaining deep learning based systems on the edge in manufacturing environments, and detail how these process improvements drive significant ROI for manufacturers.
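
A “ResNet-based architecture” for this kind of defect classification can start as small as a fine-tuned torchvision model. A minimal sketch under that assumption (our illustration, not Mariner’s system; the hyperparameters are placeholders):

import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet and retrain the head for a
# two-class problem: defect vs. no defect
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (n, 3, 224, 224) float tensor; labels: (n,) long tensor."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Most of the real effort sits outside this snippet: curating labeled defect images, handling class imbalance, and monitoring the deployed model, which is where the case studies focus.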

11:30 am - Noon ET

Accelerating AI Camera Development

Quenton Hall, Xilinx
  • AI & Machine Learning in Vision

Quenton Hall

AI System Architect, Xilinx

Accelerating AI Camera Development

Fueled by a trifecta of rapid advances in network training, big data, and ML research, so-called "Deep Learning" is rapidly becoming mainstream, especially in embedded vision applications where the end game is teaching machines to "see". However, Convolutional Neural Network (CNN) inference is computationally expensive, requiring billions of operations per inference. Moreover, many critical applications require extremely low latency and must support high frame rates. Given these constraints, and given a need for sub-10W power consumption, high-reliability, security, and product longevity, how do we design an integrated camera which can provide the required levels of ML inference performance?
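
The arithmetic behind “billions of operations per inference” sets the design constraint quickly. A back-of-the-envelope sketch (illustrative numbers, not Xilinx figures):

# Sustained-throughput budget for an embedded AI camera
gops_per_inference = 7.7   # e.g., a ResNet-50-class CNN at 224x224 input
frames_per_second = 60
required_gops = gops_per_inference * frames_per_second
power_budget_w = 10
print(f"{required_gops:.0f} GOP/s sustained -> "
      f">={required_gops / power_budget_w:.0f} GOP/s per watt")
# ~462 GOP/s at under 10 W: an efficiency target that points toward
# dedicated accelerators rather than embedded CPUs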

Noon - 1:30 pm ET
BREAK
1:30 pm - 2:00 pm ET

Embedded Learning and the Evolution of Machine Vision

Jonathan Hou, Pleora Technologies
  • AI & Machine Learning in Vision

Jonathan Hou

Chief Technology Officer, Pleora Technologies

Embedded Learning and the Evolution of Machine Vision

One of the largest influences on the future of embedded vision is artificial intelligence and machine learning, where complex systems can learn from collected data and make decisions with little to no human intervention. In this presentation, Jonathan Hou will discuss the trend towards machine learning for vision and sensor networking applications. This will include a comparison of traditional vision inspection and the advantages of AI and machine learning, the evolution of embedded platforms for vision, and an overview of how advanced sensors, including hyperspectral and 3D, can be used to augment inspection and detection applications. In particular, the presentation will focus on how system designers and developers can integrate and leverage “plug-in” machine learning and AI capabilities within existing applications as an evolutionary path towards fully networked Internet of Things and Industry 4.0 applications.

2:15 pm - 2:45 pm ET

How Machine Vision is Enabling Smart Manufacturing

Will Healy III, Balluff
  • AI & Machine Learning in Vision

Will Healy III

Industry Marketing Director, Balluff

How Machine Vision is Enabling Smart Manufacturing

As we are swept into the fourth industrial revolution, you want to be a company that comes out on top; but at the current pace of change, are we doing the right things? Are we using the right technology? With case studies and articles, we will explore why manufacturers big and small are investing in the Industrial Internet of Things (IIoT), break down the basics of smart manufacturing, and discuss the key role machine vision plays in enabling this revolution. With a look at how guidance (VGR), inspection, gauging, and identification applications are creating an Industry 4.0 factory, we will offer simple actions you can take today to start enabling your factory for flexible manufacturing and efficient production, and you will leave empowered with confidence in machine vision as the enabling technology for your next smart manufacturing project.

3:00 pm - 3:30 pm ET

Computer Vision in the Time of COVID

Eric Danziger, Invisible AI
  • AI & Machine Learning in Vision

Eric Danziger

Founder and CEO, Invisible AI

Computer Vision in the Time of COVID

As COVID-related factors disrupt operations and create unforeseen challenges, learn how computer vision can keep business and manufacturing moving. Increased attrition combined with travel restrictions has disrupted access and visibility across businesses. With the rapid maturation of computer vision over the past couple of years, intelligent camera systems can solve these problems. Quality engineers can perform root cause analysis, supervisors can provide spot-training, and managers can get the data and insight needed to optimize their operations – all remotely. Learn how computer vision can mitigate these new challenges and improve productivity, efficiency, and safety across your workforce.

FRIDAY, MAY 22, 2020
10:00 am - 10:30 am ET

Machine Learning-enabled Robotics Vision in Warehouses and Factories

Bastiane Huang, OSARO

  • Vision & Robotics

Bastiane Huang

Product Manager, OSARO

Machine Learning-enabled Robotics Vision in Warehouses and Factories

Machine learning has enabled a move away from manually programming robots toward allowing machines to learn and adapt to changes in the environment. We will discuss how machine learning is currently used to enhance robotics vision and allow robots to be used in new use cases across industries such as warehousing, manufacturing, and food assembly.
 

We will also describe recent progress in deep learning, imitation learning, reinforcement learning, and related approaches, and discuss the real-world requirements and challenges of various industrial problems, pipelined versus end-to-end systems, and the technology that companies in this space have developed as they address the challenges in robotics vision.

10:45 am - 11:15 am ET

Advances in Omnidirectional Cameras and Their Impact on Robotics

Rajat Aggarwal, DreamVu

  • Vision & Robotics

Rajat Aggarwal

CEO, DreamVu Inc

Advances in Omnidirectional Cameras and Their Impact on Robotics

Automation is changing the global landscape by redefining the way we imagine machines and what they can do for us. The future of our warehouses, industries, homes, transport systems, and healthcare will be very different from what we currently see. The need for complete autonomy has triggered a lot of innovation in the sensor world, depth sensing in particular. We have seen many new players come to the market with different approaches towards creating ideal sensors. A lot of LiDAR companies have sprung up, in most cases by demonstrating an incremental improvement over the incumbent. What seems missing is a focus on a scalable and efficient solution that can enable these autonomous systems with navigation and understanding in highly cluttered and dynamic environments. Currently, sensing is a bottleneck for scaling. In this talk, we will discuss the advancements in omnidirectional cameras that capture the 360-degree light field of a scene and how they can bridge the gap between what current sensors offer and what the end application demands.
 

Cameras for omnidirectional imaging typically use a few limited field-of-view lenses to span a large section of the spherical view around the camera centre. Camera rigs or mosaics that capture small segments and stitch 360-degree panoramas are also prevalent. Cameras with multiple sensors suffer from a range of issues related to high thermal output, complex synchronization protocols, and compute-intensive stitching. Other devices for omnidirectional odometry use hyperspectral and laser imaging components such as infrared, LiDAR, and ToF cameras. These cameras, while more accurate in sensing the environment, suffer from the sparsity of captured information and limited temporal resolution due to moving parts. A single-sensor omnidirectional depth camera can alleviate all these problems comprehensively.
 

An ideal omnidirectional camera, with the capability of imaging the full spherical field of view around the camera, can significantly improve machine vision capabilities. It can bring multifold advantages in terms of cost, compute, and power requirements over other competing sensing technologies. An ideal solution in this context should: 1) capture rich (RGB+D), dense, omnidirectional information in real time without blind spots, leading to semantic scene understanding and 3D scene structure recovery, 2) use a minimal number of physical sensors and avoid moving parts to reduce errors due to misalignment, synchronicity, and thermal caps, and 3) have appropriate form factor, weight, and power requirements to enable easy integration into typical robots and drones of varying size and shape. The sensor should also be computationally self-sufficient. These requirements pose severe challenges to the design of omnidirectional cameras.

Such a camera can significantly alter the landscape of the automation industry, especially robotics. Unlike autonomous vehicles that use GPS, indoor robots rely solely on sensors to navigate their environments. To date, unmanned rovers moving cartons around the warehouse have been weighed down with expensive sensors to capture their spatial understanding within the facility and avoid collisions with humans and other equipment. From full 360° localization and mapping to uniform situational awareness and dynamic response, these cameras can enable uninhibited interaction and interoperability between humans and robots.

11:30 am - Noon ET

The Biggest Challenges of Bin Picking

Jan Zizka, Photoneo

  • Vision & Robotics

Jan Zizka

CEO, Photoneo s.r.o.

The Biggest Challenges of Bin Picking

3D vision-guided automation is still a new field of industry, which means that we constantly face newly emerging challenges. This presentation will cover the biggest challenges of bin picking as we know it today by trying to provide answers to questions that shape the direction of the latest advances in the field. Has the development of 2D and 3D technology reached its peak or, on the contrary, is there still room for major advancements? How much fine-tuning do 3D machine vision systems need to undergo to make robot performance 100% reliable? This requires a perfect interplay between high scan accuracy, resolution, and scanning speed. To what extent are the developers of 3D machine vision solutions able to meet these requirements, and what does the current market have to offer?


From a general perspective, all 3D cameras and 3D scanners available on the market are based on technologies that can be divided into three main categories: time-of-flight, stereo vision, and technologies based on emitting a structured light pattern. How do they differ and which one is most effective? Though we may lay down multiple distinction criteria to differentiate between various 3D sensing techniques, the most meaningful one for next-generation automation seems to be the ability to scan moving objects in sufficient quality. Here a new, fourth method comes into play – the revolutionary Parallel Structured Light technology patented by Photoneo, which is based on a specially designed CMOS image sensor with a mosaic shutter. This method allows Photoneo’s 3D camera MotionCam-3D to scan objects in rapid motion. In what way is this method a breakthrough, and how does it change the traditional concept of what is possible in 3D imaging?

The growing importance of and advances in high-quality image acquisition of 3D scenes is inextricably linked to the rise of industrial automation and robotisation. Thanks to the advancements in robot vision, automation of manufacturing processes has entered a completely new dimension. Yet the complexity of applications that need to be automated poses increasingly difficult challenges to machine vision. The most common bin picking use cases include applications in industrial production, palletisation, de-palletisation, and robotic manipulation. Gradually, 3D machine vision is crossing the borders of industrial production and entering spheres such as the food industry.


The field of AI and Machine Learning is also moving forward in giant leaps and, in combination with 3D machine vision systems, finds an increasingly wide array of industrial applications. What methods are there for CAD-based matching on the one hand and picking of unknown items on the other, and where are they currently used and might potentially be used in the future? Another important feature in the context of industrial automation is path planning as it is necessary for autonomous robot performance. Where does its development currently stand and how many companies rely on this robotic “ability”? And finally, innovations in the field of industrial automation in general and bin picking solutions in particular include new approaches to grasping methods as well as efforts to shorten the cycle times of object picking and placing. Which advancements in this area can we already enjoy and what remains subject to improvement? Answers to these questions come in the form of an insightful overview of current trends in the development of 3D machine vision technologies and solutions applied in bin picking applications.

Noon - 1:30 pm ET

BREAK

1:30 pm - 2:00 pm ET

Building Advanced Robotics Applications Quickly: Vision Sensor Integration with Robot Operating System

Katherine Scott, Open Robotics

  • Vision & Robotics

Katherine Scott

Developer Advocate, Open Robotics

Building Advanced Robotics Applications Quickly: Vision Sensor Integration with Robot Operating System

Robot Operating System (ROS) is a collection of free and open-source software packages used by a large and growing developer community to build, simulate, and test robotic systems. This community includes a number of Fortune 500 companies, autonomous car and truck companies, government entities, and universities. According to ABI Research, 55% of robots shipped in 2024 will include at least one ROS package [1]. Just as Linux overtook proprietary vendors in the cloud computing market, ROS is poised to supplant closed-source systems in the development of advanced robotic applications. Despite this, vision sensor support for ROS remains ad hoc; only a handful of vendors support official ROS packages. Not only does this lack of support slow the process of research and development, it makes application development more difficult for the end user and exposes them to higher risks in their deployment.

In this talk we will cover the basics of ROS: what it is, how it works, and what it is used for. Specifically, we will show how ROS can be used to quickly simulate an imaging sensor and benchmark its performance in an application. We will then show how the simulated environment can be easily ported to actual hardware, calibrated, and integrated into a more complex application.

Following from our toy example, the talk will then cover existing ROS 1 and ROS 2 vision capabilities and the packages currently maintained by the community. This portion of the talk will discuss how vendors or users who wish to contribute open sensor drivers can properly configure their source code repositories for the best out-of-the-box experience and rapid adoption into the ROS community. Moreover, we will discuss how a good vision sensor package makes it possible to rapidly develop complex computer-vision-controlled robotics applications.

[1] https://www.bloomberg.com/press-releases/2019-05-16/the-rise-of-ros-nearly-55-of-total-commercial-robots-shipped-in-2024-will-have-at-least-one-robot-operating-system-package
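
For a sense of what vision sensor support looks like in practice, here is a minimal ROS 2 node that consumes frames from any camera driver publishing the standard sensor_msgs/Image message (a generic sketch, not material from the talk; the topic name is a common convention, not fixed):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class CameraListener(Node):
    """Subscribe to a camera driver's image stream and log frame metadata."""

    def __init__(self):
        super().__init__('camera_listener')
        self.create_subscription(Image, '/camera/image_raw', self.on_frame, 10)

    def on_frame(self, msg):
        self.get_logger().info(
            f'{msg.width}x{msg.height} frame, encoding={msg.encoding}')

def main():
    rclpy.init()
    rclpy.spin(CameraListener())

if __name__ == '__main__':
    main()

Because the message type is standardized, the same node works against a simulated camera (e.g., in Gazebo) and real hardware, which is exactly the simulation-to-hardware path described above.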

2:15 pm - 2:45 pm ET

High-Speed Bin Picking via Commercial Cameras and AI

Paul Thomas, PE, Procter and Gamble
  • Vision & Robotics

Paul Thomas, PE

Section Head - Applied AI and Machine Vision, Procter and Gamble

High-Speed Bin Picking via Commercial Cameras and AI

This presentation will take you on a journey demonstrating highly accurate, high-speed pick and place of consumer goods. Through the use of commercially available hardware and in-house deep learning software, travel through concept, data collection, training, and execution, and see some of the secrets under the hood that enabled this idea to become reality.

3:00 pm - 3:30 pm ET

The Future of Robot Safety: From Collaborative Robots to Collaborative Applications through Advanced Vision

Clara Vu, Veo Robotics
  • Vision & Robotics

Clara Vu

Co-founder and CTO, Veo Robotics

The Future of Robot Safety: From Collaborative Robots to Collaborative Applications through Advanced Vision

When people imagine what factories will look like in the future, many of us picture a “lights out factory” with machines humming all day and night with no people in sight. But transforming factories, particularly to make them more flexible, will mean physically bringing together humans and robots. The most flexible machine in a factory is a robot and the most flexible resource is a human. Industrial robots are powerful, precise, and repeatable, but they don’t have the flexibility, intelligence, dexterity, and judgment of humans, and they aren’t going to acquire them any time in the foreseeable future. The best way to make manufacturing flexible is to let robots and people work closely together, each doing what they do best.

 

Popular perception of industrial collaborative robot systems centers on the robot itself, which is often a particular type of robot called Power and Force Limited (PFL). However, PFL is only one means of achieving safe collaboration, and it only addresses a subset of the risks involved in collaborative applications. Another approach that is growing in popularity is Speed and Separation Monitoring (SSM), which addresses some of PFL robots’ shortcomings.

 

Collaborative applications using SSM have fewer limitations on end effector design, robot speed, and payload. However, their implementations increase the complexity of the overall system because they require the integration of advanced 3D vision sensing systems and the computation of protective separation distances. Future intelligent vision sensing systems must reduce the burden of calculations on the integrator, providing a holistic approach to workcell safety.
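
To make “computation of protective separation distances” concrete, here is a simplified sketch in the spirit of ISO/TS 15066 speed-and-separation monitoring (our illustration only; the standard adds further terms and directional considerations, and all numbers below are hypothetical):

def protective_separation_mm(v_human, v_robot, t_reaction, t_stop,
                             c_intrusion=0.0, z_uncert=0.0):
    """Simplified protective separation distance, in millimeters.

    v_human:    human approach speed, mm/s (1600 mm/s is a common default)
    v_robot:    robot speed toward the human, mm/s
    t_reaction: sensing + processing time, s
    t_stop:     robot stopping time, s
    """
    s_human = v_human * (t_reaction + t_stop)  # human travel while system stops
    s_robot = v_robot * t_reaction             # robot travel before braking starts
    s_brake = v_robot * t_stop / 2.0           # rough braking-distance estimate
    return s_human + s_robot + s_brake + c_intrusion + z_uncert

# Example: 1600 mm/s human, 1000 mm/s robot, 100 ms sensing, 300 ms stop
print(protective_separation_mm(1600, 1000, 0.1, 0.3))  # 890.0 mm

Every millisecond of sensing latency adds directly to the required distance, which is why intelligent vision systems that reduce the integrator’s calculation burden matter.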

 

This talk will cover the possibilities for flexible manufacturing that human-robot interaction can enable, the technical and robotic vision challenges it raises, and what it means to create a safe collaborative workcell. We will discuss how Veo Robotics is addressing these challenges using advanced 3D safety-rated Time-of-Flight vision technology for Speed and Separation Monitoring. The talk will conclude with an examination of the impact that human-robot collaboration will have on manufacturing, from flexible factory to continuously adaptive factory.

3:45 pm - 4:15 pm ET

Random SKU Depalletizing Using Vision and AI

Bryan Knott, ABB, Inc.
  • Vision & Robotics
 

Bryan Knott

Logistics Business Line Manager, ABB, Inc.

Random SKU Depalletizing Using Vision and AI

In this presentation we will describe the problem and the offered solution: using a proprietary camera and AI system to allow the robot system to unload pallets without needing to be taught the box sizes or pallet patterns.

REGISTER NOW

SPONSORS

Advanced Illumination
Allied Vision
Euresys
IDS Imaging Development Systems, Inc.
Matrox Imaging
Phase 1 Technology Corp.
Teledyne Imaging

MEDIA PARTNERS

Assembly Magazine
Imaging & Machine Vision Europe
MVPRO
Quality Magazine
Vision Spectra
Vision Systems Design

REGISTER NOW

SPEAKERS
Rajat Gupta (Keynote)
Director, Business Development - AI and MR
Microsoft Corp.
Andreas Platz
Product Manager Machine Vision
Sill Optics GmbH
Andrew Long
CEO
Cyth Systems
Bastiane Huang
Product Manager
OSARO
Bryan Knott
Logistics Business Line Manager
ABB, Inc.
Clara Vu
Co-founder and CTO
Veo Robotics
Dany Longval
Vice President of Sales
Teledyne Imaging
Darcy Bachert
CEO
Prolucid Technologies Inc.
Dave Spaulding
President & CEO
Smart Vision Lights
Dr. Denise A. Valenti
CEO/President
IMMAD, LLC
Dr. Dietmar Ley
CEO
Basler
Dr. Kamel Saidi
Group Leader
National Institute of Standards and Technology
Eric Danziger
Founder and CEO
Invisible AI
Georgy Das
Technical Training Manager
Midwest Optical Systems
Gregory Hollows
Vice President
Edmund Optics
Jake Sigmond
Application Sales Engineer
FLIR Systems
Jan Zizka
CEO
Photoneo s.r.o.
John Agapakis
Director Business Development, Traceability Solutions
Omron Automation Americas
Jonathan Hou
Chief Technology Officer
Pleora Technologies
Katherine Scott
Developer Advocate
Open Robotics
Luca Bonato
Product Manager
Opto Engineering
Marc M. Landman
Senior Technical Advisor
CCS America, Inc.
Mark Lewandowski
Robotics Innovation Technical Section Head
Procter & Gamble
Markus Tarin
President & CEO
MoviTHERM
Martin H. Ettenberg, Ph.D.
President & CEO
Princeton Infrared Technologies, Inc.
Mattias Johannesson
Senior Expert 3D Vision
SICK
Nick Sischka
Manager of Sales Operations, Imaging
Edmund Optics
Paul Thomas, PE
Section Head - Applied AI and Machine Vision
Procter & Gamble
Quenton Hall
AI System Architect
Xilinx
Rajat Aggarwal
CEO
DreamVu Inc.
Samuel P. Sadoulet
President and Chief Operating Officer
Edmund Optics
Sean Pologruto
Applications Engineer
Basler
Stephen Welch
VP of Data Science
Mariner USA
Steve Kinney
Director of Engineering
Smart Vision Lights
Steve Wardell
Director of Imaging
ATS Automation; Chairman, AIA Board of Directors
Will Healy III
Industry Marketing Director
Balluff Inc.