ONE EVENT. TWO DAYS.
THREE WORLD-CHANGING TECHNOLOGIES.
Collaborative robots. Advanced vision technologies. Artificial intelligence.
The automation industry is yet again facing a time of tremendous change and disruption. Don’t be left behind.
At this two-day Collaborative Robots, Advanced Vision & AI (CRAV.ai) conference, you will learn about these growing, dynamic technologies from experts in each field. You’ll connect with leading suppliers and integrators at the tabletop exhibition. And you’ll walk away with actionable insights to help advance your business.
Some of the highlights:
- Participate in our pre-conference training opportunities. Advance your vision and imaging knowledge with our Certified Vision Professional Program, or take part in collaborative robot safety training.
- Learn from our expert keynote speakers: Ken Goldberg (UC Berkeley), Matt Vasey (Microsoft), and Pieter Abbeel (UC Berkeley).
- Choose between 30+ expert sessions in three tracks. See the full agenda.
- Check out 75+ exhibits from leading technology providers, suppliers and integrators. See the full floor plan.
- Connect with new business partners and industry peers at CRAV.ai’s networking events.
Register for as little as $895! See complete pricing.
(Need help convincing your boss? Click here!)
LEARN. DISCOVER. CONNECT.
Customize your schedule to fit your interests! This conference will include three in-depth tracks with sessions on robotics, advanced vision and AI. These tracks will explore the latest innovations – and provide you with practical automation solutions.
- Learn practical solutions from industry leading experts
- Discover new opportunities in emerging markets
- Connect with peers and new potential business partners
- Gain valuable insights that will help you improve your business in these areas
- Experience these dynamic technologies first-hand and get answers to your questions on-the-spot
The Collaborative Robots, Advanced Vision & AI Conference welcomes:
- Engineering and manufacturing personnel seeking effective ways to reduce cost, improve quality, and advance productivity, while increasing flexibility
- Experienced users seeking new applications
- Those interested in learning how artificial intelligence is enabling advances in robotics and advanced vision technology
- Prospective users trying to determine if robotics, vision, and artificial intelligence make sense for their companies
This dynamic conference will introduce you to the technologies, trends, challenges and people that are disrupting the status quo with revolutionary innovations.
Convince Your Boss
Download our letter template to send your boss – planning your participation early will save your company big bucks!
REGISTER NOW
Reach hundreds of qualified professionals with a tabletop registration. Click here to register for exhibit space.
If you are interested in speaking at the 2018 Collaborative Robots, Advanced Vision & AI Conference, please submit the Call for Speakers form.
Pieter Abbeel, Professor and Director of the Robot Learning Lab, UC Berkeley
Pieter Abbeel, Professor and Director of the Robot Learning Lab
Recent advances in Artificial Intelligence have enabled a wide range of breakthroughs in many domains, including image recognition, speech recognition, learning to play video games, learning to control simulated and real robots, and mastering the classical game of Go. In this keynote session, Pieter will highlight these advances, provide some background on ideas and trends that have powered these advances, and shed some light on where AI is headed, with an emphasis on robotics.
Christopher McMurrough, Chief Technology Officer, Cloud 9 Perception
Christopher McMurrough, Chief Technology Officer
Cloud 9 Perception
Recent advances in optical sensing technologies have increased the performance, reliability, and cost-effectiveness of 3D data streams in industrial machine vision applications. Stereo cameras, Time of Flight (ToF) sensors, and structured light scanners providing RGB-D pixel data have made it possible to adopt high performance 3D graphics processing techniques that have, until now, been limited to tightly constrained laboratory environments. An overview of 3D sensing technologies and fundamental algorithms for extracting knowledge from scenes will be discussed.
Jason Fortune, Group Manager – Advanced Engineering Solutions, Applied Manufacturing Technologies
Jason Fortune, Group Manager – Advanced Engineering Solutions
Applied Manufacturing Technologies
Collaborative robots have emerged as the latest "silver bullet" solution to meet manufacturing challenges. While this technology opens new doors and creates excitement about the potential, it is important to consider the key factors that support the successful application of robotic automation with a positive impact to your manufacturing environment.
Dr. Mohsen Hejrati, CEO & Co-Founder, Clusterone
Dr. Mohsen Hejrati, CEO & Co-Founder
We will present deep learning applications such as voice recognition (e.g., Siri), natural language processing (e.g., Google Translate), computer vision (self-driving cars), and recommendation systems (Amazon's suggestion engine).
Colin McCarthy, Training Data Consultant, Mighty AI
Colin McCarthy, Training Data Consultant
Autonomous vehicle perception researchers around the world have collected terabytes of raw sensor data that must be organized and labeled to train their models. Efficiently sorting through that data to identify rare objects and scenarios for labeling is critical for time and cost savings when generating training data sets. It is equally important to understand the effects of annotation accuracy on model performance so that teams can optimize workflows for achieving high quality without expending unnecessary resources. In this session, we review methods and best practices for scaling your ground truth annotation program.
Bob Bollinger, Procter & Gamble
Bob Bollinger, Global Robotics Applied Innovation Leader
Procter & Gamble
At first glance many collaborative robot applications may appear simple and straightforward. This presentation will discuss the learnings and safety strategies required to implement a "simple" power and force limited (PFL) collaborative robot cell.
John Lizzi, Executive Director, Robotics, GE Global Research
John Lizzi, Executive Director, Robotics
GE Global Research
The Next Wave of Industrial Productivity: How GE is leveraging Robotics and AI to impact the industrial world.
Kenneth Funes, Co-Founder & CEO, Eyeware
Kenneth Funes, Co-founder and CEO
In order to achieve true collaboration, cobots will need to understand human attention. This will enable a natural human-machine interaction between the operator and the robot. Eyeware is presenting a novel approach to equip robots with 3D eye tracking capabilities using industrial depth sensing cameras. Attention sensing with 3D eye tracking will speed up interactions with the user, will allow cobots to pre-position themselves intelligently, and will disambiguate gaze towards objects in 3D space. Additionally, cobots will be able to instruct operators on how to use a workplace and suggest improvements if attention is being paid to the wrong object or surface. A first proof of concept was demonstrated by Festo at Hannover Messe in Germany earlier this year, where Eyeware’s 3D eye tracking was integrated into their Bionic Workplace allowing an enhanced screen interaction with the operator (https://youtu.be/InHJ_Tvzl28).
Roberta Nelson Shea, Global Technical Compliance Officer, Universal Robots
Roberta Nelson Shea, Global Technical Compliance Officer
An MIT study based on research done at BMW plants found that humans & robots working together in a team can be around 85% more productive than teams made of either humans or robots alone. This is a groundbreaking finding which many forward-looking manufacturers are actively exploiting to keep their competitive edge. Typical applications of human-robot collaboration include automotive assembly lines, electronics assembly (laptops, cellphones), and food/consumer goods packaging. Cobots are increasingly being used for screwing, gluing, machine tending, polishing, painting, welding, and other assembly line operations.
George Babu, Co-Founder, Kindred AI
George Babu, Co-Founder
Building Intelligent Brains for Robots to Solve Real-World Problems in Complex Supply Chains: As growth in online shopping continues, retailers around the world are seeking ways that artificial intelligence can transform their supply chain operations in order to keep up with customer demand. George Babu, co-founder of Kindred AI, will discuss how artificial intelligence and robots are tackling the complex tasks, such as grasping, that were traditionally handled by humans.
Daniel Howe, Regional Development Manager - Americas, LMI Technologies
Daniel Howe, Regional Development Manager - Americas
Vision-guided collaborative robot systems are rapidly transforming production processes by making robots highly adaptable and easy to implement, while dramatically reducing the cost and complexity previously associated with the design and setup of fixed robotic cells. In this presentation, we will explore the role 3D technology plays in supplying these factory systems with the advanced vision, motion, measurement, and control capabilities required for key industrial robotic applications such as flexible quality inspection and pick-and-place.
Limor Schweitzer, CEO, MOV.AI
Limor Schweitzer, CEO
Within the next few years intelligent robots will perform most common physical tasks, freeing humankind to be more creative and productive and enabling faster market scalability. Thanks to falling prices for key automation hardware and more accessible software, SMEs now have access to automation that was previously affordable only to the car industry. This technology will replace human-operated machines with autonomous robots that work safely alongside people and other robots in any environment and at any scale.
Steve Reinharz, Founder & President, Robotic Assistance Devices
Steve Reinharz, Founder & President
Robotic Assistance Devices
Reinharz will discuss the commercialization of AI technologies and the process of converting ideas into products and services, drawing from personal experience.
Carlo Dal Mutto, CTO, Aquifi Inc.
Carlo Dal Mutto, CTO
Real-time object identification and defects recognition are fundamental building blocks of “Industry 4.0” and “Logistics 4.0”, the latest phases of automation in manufacturing and logistics. An intuitive solution for both problems consists of first constructing three-dimensional scale models of all the items in the considered inventory and then analyzing all such models by means of Deep Learning techniques. Within this session, an overview of the system architecture considered for tackling this problem is provided, with a specific focus on the benefits that 3D provides to Deep Learning, both in terms of reduction of required training samples as well as improvements in identification quality.
Samuel Bouchard, CEO, Robotiq
Samuel Bouchard, CEO
Whenever you ask if robots could work in your factory, the answer you receive is always a hesitant “It depends.” It depends on your factory, your team, which robot you choose, what you want it to do… and a whole lot more. Those who have deployed robots in their factories know too well what that’s about. Even the most enthusiastic robot adopters can find their projects plagued by malfunctioning communication channels—among humans as well as robotic components. How do you simplify robot cell deployment? The answers can be found in lean robotics. Lean robotics is a systematic way to complete the robotic cell deployment cycle, from design to integration and operation. It will empower your team to deploy robots quicker and more efficiently than ever before. Whether you’re a manufacturing manager or engineer, if you’re ready to make robots work for you, this session will show you how.
Sheldon Fernandes, Sr. Software Engineer, Lucid
Sheldon Fernandes, Sr. Software Engineer
Today's multi-camera devices will be tomorrow's "seeing" machines. From 3D video cameras with holographic viewfinders, to robotic assistants that can thread needles--this presentation will draw similarities between human eyes and software solutions. It will also uncover the “sighted machine” revolution occurring behind the scenes, as well as the exponential ROI when AI and computer vision are able to replace structured light and time-of-flight depth sensors, which are too expensive for most consumer technologies.
Darcy Bachert, CEO, Prolucid Technologies Inc.
Darcy Bachert, CEO
Prolucid Technologies Inc.
Integration of cloud technology is becoming a critical component of many vision applications for a variety of reasons, including larger and more complex datasets, integration of machine learning and advanced analytics, and increased pressure for any possible competitive advantage. That said, cloud integration poses many challenges, including data protection and security, technology selection and implementation, and designing for occasional connectivity when the cloud is temporarily unavailable. This presentation will share real-world examples of adopting cloud technology in vision applications, along with the challenges involved and best practices for tackling them.
Patrick Sobalvarro, CEO and Co-Founder, Veo Robotics, Inc.
Patrick Sobalvarro, CEO and Co-Founder
Veo Robotics, Inc.
The End of Fear, and How It Will Change Durable Goods Manufacturing

The next frontier for robotics is in durable goods assembly, which has seen multiple failed experiments in industrial automation to date. Now huge improvements are possible.

First, even in the largest and most sophisticated manufacturers in the world, much of durable goods manufacturing is highly manual. Attempts at intensive automation in this sector have been made and have failed repeatedly over the past 40 years, most recently at Tesla Motors. In fact, full automation in final assembly tends to correlate inversely with profitability, quality and returns on investment in durable goods manufacturing. Instead, the most successful manufacturers are the ones who recognize and make use of the ingenuity of production workers. Contrary to popular perception and hype, current research in artificial intelligence is nowhere close to being able to replicate these abilities. The need for human ingenuity and flexibility is a consequence of the nature of the work and the products, and we will show how current market trends will only increase the gap.

Second, "automation assist" systems, particularly in the form of in-cycle human-machine interaction, can nonetheless lead to tremendous improvements in worker productivity, ergonomics, and capital asset utilization. In distribution centers and light manufacturing, we've seen the value of automation assistance with power and force-limited collaborative robots. But these robots lack the performance for most applications in durable goods manufacturing. Newer technologies are now making it possible for larger robots to work collaboratively and safely with humans. How will these new systems revolutionize durable goods manufacturing?

Finally, we will present patterns of application of collaboration in durable goods manufacturing, generalized from work in the field with major manufacturers. These patterns can be applied to many of the longest and most problematic process steps in durable goods manufacturing for large reductions in takt time, big increases in labor productivity, and greatly improved ergonomics. We'll introduce metrics for evaluating potential process improvements and calculating return on investment from these new approaches.
Scott Melton, Director of Sales & Execution, FANUC America
Scott Melton, Director of Sales - West Region
Bringing an industrial IoT solution into an existing facility can be a daunting task – determining what data to collect, how to analyze it, what protocols to use, and how to get multiple platforms to work and communicate together. This presentation will cover the basics, benefits, and best practices of data collection, as well as provide some solutions for collecting and analyzing data from equipment and machine tools with different platforms and protocols.
Matt Vasey, Director, AI and Internet of Things, Microsoft
Matt Vasey, Director, AI and Internet of Things
Keynote session description coming soon.
Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering, UC Berkeley
Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering
Consumer adoption of e-commerce is skyrocketing at Amazon, Walmart, JD.com, and Alibaba. As new super-sized warehouses are opening every month, it is proving increasingly difficult to hire enough workers to meet the pressing need to shorten fulfillment times. Thus a Holy Grail for e-commerce is robots that are capable of Universal Picking: reliably and efficiently grasping a massive (and changing) set of products of diverse shapes and sizes.
Eric Egan, Program Operations for the Radiological Source Recovery Program, Idaho National Laboratory
Eric Egan, Program Operations for the Radiological Source Recovery Program
Idaho National Laboratory
This presentation will discuss the feasibility, application, and economy of applying collaborative robots (cobots) in lieu of traditional through the wall teleoperated manipulator systems, traditionally used within hazardous (radioactive) environments. Both fixed facility and mobile applications will be discussed. Both single- and dual-armed cobot systems will be employed. The six-axis, plus gripper, single-armed cobots will provide for machine vision enhanced pick-n-place applications within shielded rooms. Each six-axis, plus gripper, dual-armed cobot system will provide for machine vision enhanced pick-n-place applications and also include force-reflecting (haptic) teleoperations controls for non-routine materials handling operations. The dual-armed system will be mounted to a motorized stand with motorized adjustment both vertically and rotationally (around its base). To complement these cobots, both 2-D and 3-D vision technologies will be implemented. The vision technologies, in these applications, are used in lieu of traditional radiation shielding windows. 2-D video systems will provide for large area viewing, incorporating pan, tilt, and zoom capabilities. 3-D video systems will be mounted to the dual-armed robot stand and provide viewing, including depth-of-perception, to the operator, performing material handling and other tasks, from an operator workstation located outside the hazardous environment. The unique workstation used with the dual-armed systems will also be described.
Daniel Burseth, Vice President, Eckhart Automation
Daniel Burseth, Vice President
Why can some organizations successfully modernize their factories while others cannot? The state of Industry 4.0 technology far surpasses most organizations' ability to evaluate, align, and ultimately implement technology at scale. The primary challenge in most Fortune 500 OEM environments is organizational in nature, compounded by a shrinking pool of internal technical talent. Daniel brings a perspective from designing and launching entire production lines at some of the largest assembly plants in North America to help explain why some factory modernization efforts succeed and others fail.
Russell Toris, Director of Robotics, Fetch Robotics
Russell Toris, Director of Robotics
As collaborative robots have entered the workforce, the industry is discovering that people who are working with robots want them to move in a very "human" way. Robots need to be able to identify things that humans are good at recognizing, such as other people and other automation equipment, so the robots can make natural movements to predict where the object or person is going and plan accordingly. Traditionally, engineers create motion-planning algorithms, which focus on optimization and efficiency. Without further semantic understanding of the environment around the robot, these algorithms can lead to jerky, unnatural movements when they encounter other people and automation equipment. Russell Toris of Fetch Robotics will lead a discussion on the current and future role that vision technology plays in modeling collaborative human-robot interaction.
Brendan Lelieveld-Amiro, Sales Engineer, Lumenera Corporation
Brendan Lelieveld-Amiro, Sales Engineer
Multispectral imaging (MSI) is a technology that combines visible and near infrared (NIR) imaging to extract information beyond what the human eye can see. Advances in sensors and filters are transforming the implementation of multispectral imaging solutions from expensive military and defense systems to affordable commercial systems. The most cost-effective systems combine an RGB camera with a NIR camera and leverage image fusion techniques to create highly detailed composite images. Applications are found in everything from medical imaging to precision agriculture. In automation applications, MSI can be used for food sorting, textile surface mapping, and printed circuit board inspection.
Peter Harris, CEO, HighRes Biosolutions
Peter Harris, CEO
Biopharmaceutical companies are some of the most sophisticated users of robotics, harnessing the power of automation to drive increased efficiency in therapeutic research. While laboratory automation has been developing for quite some time, the advent of collaborative robotics has fundamentally changed the architecture of robotic systems, opening up research areas previously not possible for industrial robots. In this session we will review how collaborative robots are impacting the nature of drug discovery, and touch on the symbiotic relationship between robotics, machine learning and artificial intelligence.
Dr. Andreas G. Hofmann, Head of the Boston R&D Lab, GreyOrange
Dr. Andreas G. Hofmann, Head of the Boston R&D Lab
A new generation of technologies and methods has opened up tremendous opportunities in supply chain automation. This session will highlight how end-to-end automation of order fulfillment in a distribution center can be maximized, especially for e-commerce and omnichannel distribution. By using the newest robotics systems, which leverage machine learning and AI to collaborate with human operators, warehouse productivity can be increased by 2 to 5 times.
Mariann Kiraly, Business Development Director, Vision Components
Mariann Kiraly, Business Development Director
Certain ideas and concepts are affiliated with the term Embedded Vision System, but what does it mean and where does it come from? In this presentation, we will examine the definition and key features that make up an embedded vision system and trace its technological development to date. Understand the advantages of embedded vision systems through real-life application examples and gain insight into why they are being deployed at an exponential rate for industrial, robotic, medical and ALPR applications.
Bernardo Mendez-Arista, Sr. Product Manager for Collaborative Robots, Yaskawa Innovation
Bernardo Mendez-Arista, Sr. Product Manager for Collaborative Robots
This session will discuss the role of collaborative robotics in the new paradigms of manufacturing and supply chains, particularly for the transition from centralized to distributed manufacturing. Artificial Intelligence and Machine Learning will also be covered, as they play a pivotal role in the evolution of manufacturing: they will be the tools that allow the optimization of human-robot collaboration (HRC) and human-robot interaction (HRI).
Michael Bush, Field Director, Darktrace
Michael Bush, Field Director
Cyber security is an almost impossible problem to solve. This is particularly true in industrial environments, which have long faced advanced attackers -- from corporate espionage to state-sponsored threats. As IT and OT environments converge and with the growth of industrial IoT, perimeter defenses and airgapping simply aren’t enough anymore. While total prevention of compromise is untenable, utilizing automated self-learning technologies to detect and autonomously respond to emerging threats within a network is an achievable goal.
Shaun Edwards, CTO, PlusOne Robotics
Shaun Edwards, CTO
New vision technologies hold great promise for robotics in unstructured and dynamic environments. Logistics is a large, and growing, market segment which stands to benefit significantly from increased automation. The intersection of new vision technologies with robotics in logistics applications provides an exciting and potentially lucrative opportunity. This presentation discusses these new technologies, assesses their risk and potential for specific logistics applications.
Rick Faulk, CEO, Locus Robotics
Rick Faulk, CEO
This session on the economics of robotics in the warehouse will address the various aspects of the growing e-commerce industry that have made autonomous mobile robotics a critical solution for retailers and 3PLs looking to remain competitive and meet customer demand effectively.
Mariano Phielipp, Senior Deep Learning Data Scientist, Intel
Mariano Phielipp, Senior Deep Learning Data Scientist
The algorithmic breakthroughs coming out of the AI community bring a number of new opportunities for the cobot development ecosystem: new actuators based on improved visuomotor policies; generalist robots that learn faster and handle more general, longer tasks; an industrial push to update open-source tools like ROS-Industrial; startups such as Covariant.AI and MOV.AI contributing and soon to play a role in the ecosystem; and finally, a fast-improving software ecosystem from cloud providers and professional-grade deep learning frameworks like TensorFlow.
Patrick Cutler, Research Scientist, Teledyne Scientific & Imaging
Patrick Cutler, Research Scientist
Teledyne Scientific & Imaging
Deep Learning techniques have revolutionized robotics and machine vision systems, yet they still require massive amounts of labeled training data and are brittle when presented with new objects to recognize. In this presentation, we will discuss how Teledyne is developing machine vision solutions that adaptively exploit information from multiple sensors and modify their decision criteria to achieve improved recognition performance. We present recent efforts aimed at reducing the required amount of labeled training data and implementing effective continual learning algorithms. We will present examples of some of our achievements in challenging environments involving poor lighting (nighttime driver vision), occlusions, limited training data and hard-to-characterize objects such as part defects and cables.
Michka Tosan, Applications Engineer, KUKA Robotics
Michka Tosan, Applications Engineer
One of the most exciting developments in the automation field, collaborative robots offer companies the dynamic flexibility to handle a wide range of applications across various industries. While they have exciting potential to increase production and efficiencies in different areas, collaborative robots are still in their relative infancy. This presentation will provide attendees with working definitions and requirements, as well as actionable advice, on how to safely adopt collaborative robots into their working environment. Going beyond the traditional applications and benefits of the manufacturing setting, Mr. Tosan will explain how collaborative robots are being utilized in other industries. He will also share how autonomous ground vehicles (AGVs), with advanced collision prediction and avoidance, and real-time perception and sensor fusion offer exciting possibilities to work with and improve the safety of collaborative robots going forward.
Derik Pridmore, Co-founder and CEO, Osaro
Derik Pridmore, Co-founder and CEO
Derik will discuss the real-world requirements and challenges of various industrial problems, pipelined versus end-to-end systems, and the technology Osaro has developed to address the challenges of industrial robotics.
Jeremy Fishel, CTO, SynTouch
Jeremy Fishel, CTO
The sense of touch is essential to human dexterity. Human hands that have lost the ability to feel are slow, clumsy, and uncoordinated. Robots lacking the sense of touch suffer the same limitations when operating outside of a fully-defined environment. We propose that collaborative robots seeking human-like dexterity, speed, and performance would benefit from biologically-inspired strategies incorporating touch. In this talk, we will discuss our experience developing human-like tactile sensors and applications of this technology in robotics to enable human-like dexterity and perception. In our studies to incorporate tactile reflexes into prosthetic hands, we have found touch to be more useful than vision for dexterous tasks, and that touch combined with vision allowed for performance that neared that of unamputated hands -- benefits we predict will translate to collaborative robotic performance if they take advantage of robust and reliable tactile sensing.
Gal Inbar, President, iCobots
Gal Inbar, President
Lessons Learned Through "Hands On" Deployment of Smart Cobots

Smart cobots are very different creatures from traditional robots or regular cobots, and deploying them requires a different business model and skill set. In my presentation I will present several case studies and discuss insights on aspects such as the sales process, POC & feasibility studies, FAE skill sets, customer investment risk management, project management, "ownership" on the shop floor, and preparing people to work with cobots. I will also discuss smart cobots' adaptability to ever-changing tasks on the production floor and how best to help deployment teams continue the required support internally.
Rahul Chipalkatty, Founder & CEO, Southie Autonomy
Rahul Chipalkatty, Founder & CEO
In this session, the speaker will examine the workforce gap and today's relationship between humans and (collaborative) robots. He'll also provide an overview of the latest robotics and AI technologies (both R&D and industrial) available to help make our factories and logistics centers more productive.
Keynote Speaker Name, Keynote Speaker Title
Will future robots look like humanoids? Will they be squat, round androids? Or could they be a little of both? In our closing keynote, a leading aerospace robotics engineer explores these questions and discusses some of the robots already working in orbit.
Keynote Speaker Name, Keynote Speaker Title
Attendees will have the opportunity to visit Fetch Robotics' Center of Excellence -- one of the largest and most complete warehouse and logistics demo facilities in North America. Busing and hors d'oeuvres will be provided.
PICK THE PASS THAT WORKS FOR YOU.
Customize your learning experience with several registration options designed to fit your needs. Pre-Conference Training passes will also be available for Collaborative Robot Safety Training and AIA’s Certified Vision Professional (CVP) Basic level courses.
Reach hundreds of qualified professionals with a tabletop registration. If you are interested in exhibiting at this event, click the link below and fill out the form to be notified when registration goes live.
|Conference Pass**||Price|
|Members (AIA, MCMA, or RIA)||$1,095|
**Includes access to keynote sessions, conference tracks, all networking functions, admission to table top exhibits, and all meals
|Exhibit Hall Pass*||Price|
|One Day Pass||$100|
|Two Day Pass||$175|
*Includes access to all networking functions, admission to table top exhibits, and all meals
PRE-CONFERENCE TRAINING OPPORTUNITIES
|Certified Vision Professional (CVP) at CRAV.ai Rates||AIA, MCMA, or RIA Members||Non-Member|
|Basic Courses Only (5 courses over 2 days)||$575||$675|
|Basic Courses + Exam (5 courses over 2 days, Basic Exam & 1 Free Retake)||$750||$850|
|Basic Exam Only (Includes 1 Free Retake)||$295|
|Advanced Exam Only (Includes 1 Free Retake)||$495|
*Discounts are available to member companies – Become an RIA member today and take advantage of the benefits right away!
*Tabletop registrations include admittance to the conference for one (1) person. Additional conference passes can be purchased at the current market rates.