ASK THE EXPERTS
More Answers From Perry West
I want to use a line scan camera to scan (in colour) at a resolution of about 300 dpi with an object speed of about 150 meters/minute. How can I calculate the line rate suitable for my application?
Assuming you want nominally square pixels (ignoring the dynamic component caused by the part moving during exposure), your scan period is determined by the time it takes the material to move a distance equal to the horizontal pixel spacing. For your application, 300 dpi is a horizontal pixel spacing of 0.084 mm/pixel. The line speed, 150 m/min, is 2500 mm/sec. The scan frequency needs to be 29,762 scans/sec (a period of 33.6 usec/scan). Hope this helps.
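If it helps to see the arithmetic spelled out, here is a minimal Python sketch of the same calculation (variable names are my own; the exact result depends on how the pixel spacing is rounded):

    # Line rate for a line-scan camera with nominally square pixels (sketch).
    MM_PER_INCH = 25.4

    dpi = 300.0                       # required resolution, dots per inch
    speed_m_per_min = 150.0           # object/web speed

    pixel_spacing_mm = MM_PER_INCH / dpi            # ~0.085 mm/pixel
    speed_mm_per_sec = speed_m_per_min * 1000 / 60  # 2500 mm/sec

    line_rate = speed_mm_per_sec / pixel_spacing_mm  # ~29,500 scans/sec (about 29,762 if the spacing is rounded to 0.084 mm)
    period_usec = 1e6 / line_rate                    # ~33.9 usec/scan

    print(f"line rate: {line_rate:.0f} scans/sec, period: {period_usec:.1f} usec")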
We want to develop a grain sorting system for which we want to use a line scan camera. Please suggest a suitable camera and algorithm. The dealer is in India.
There are several good line-scan camera companies. On several projects, I've used the Dalsa line-scan cameras (I have no financial connection to them), but I would consider others, too, like e2v. The algorithm will most likely need to be custom developed. There is no "true" line-scan software package available; the ones on the market are modifications to area camera programs. While I could ramble on about line-scan algorithms, you gave far too little information to offer suggestions. There are many details needed, such as what the sort criteria are, how the grains are being presented, what the production rate is, and other system architecture details. This discussion would be too much to handle on this forum. You can contact me directly if you wish to get into greater detail.
I am an engineering student at San Jose State University, and my group and I are currently working on our senior project. Our project consists of a pipe climbing robot with integrated machine vision to detect corrosion, cracks, and imperfections on pipes. We wanted to know if someone would be able to assist us in selecting a camera that would be suitable for our purpose. We would really appreciate your help. Best, Daniel
Daniel -- I am in the San Jose area and would be willing to spend a bit of time with you to help you work out your requirements and possibly identify the components you will need. Please contact me by return e-mail.
How do we develop a camera verification system? We have AVT cameras installed on our printing machines.
Perhaps more explanation of what you are trying to verify would help. If you want to verify the camera settings, then either you need to go into the camera setup program and check all the settings and registers or you need to have a program written as suggested by a prior respondent. On the other hand, you may want to verify the quality of the image your camera is creating. If this is what you want, I have a white paper in draft form, "Benchmarking the Imaging". I also have a video, also in draft form, that covers the same material. Let me know if you are interested in either of these.
Which is the best camera for an online egg sorter?
In the U.S.A., eggs are sorted by weight, not physical size. They are inspected (sorted) for defects. I will assume you are looking for defects. The type of camera depends very significantly on your requirements and on your part handling (how the eggs will be transported past the camera). It is possible to envision using either an area or a line-scan camera. The camera resolution depends greatly on how small a defect you need to find and on the size of your field-of-view. Several cameras may be needed side-by-side depending on your egg sorter and how many parallel lanes it has. Perhaps you can provide more information about your requirements.
We are trying to detect fungus and/or bruising on vegetables moving on a sort table at 87 feet per minute. I don't know whether UV or some other camera type would detect this. I am trying to interface it to my robot for picking the defective vegetables off the line.
Wayne -- most food inspection uses the green, yellow, and red visible wavelengths (almost never blue) as well as near infrared (NIR) and short wave infrared (SWIR). While UV might work, there are several reasons, safety being one of them, to avoid using it if other wavelengths will do the job. Any qualified vision engineer would want to test the imaging at the appropriate wavelengths before making a recommendation unless they have solved your identical problem before. There are a great many more questions that need addressing such as how the vegetables are moved on the sort table (vibration, gravity, conveyor, etc.) before making a recommendation for a camera. I would suggest writing out a preliminary specification before talking with people. I'd be happy to review the spec for completeness if you wished.
We are using a Yag laser welding a 1mm dia seam to retain a plug pressed into a hole. We have a camera mounted looking thru the laser focus lens. The camera is used for aiming the laser before cycling. Can we integrate an automated vision system which checks the weld for holes after welding and then signals the controller to mark either good or bad welds? Thanks, Dan
Dan -- very likely you can use the same camera for inspection. The reservation comes from not knowing the resolution of the camera and the size of weld voids you want to detect. There are other issues such as illumination that may need to be added to make the voids visible and the camera's interface. The latter issue, the interface, is probably easily solved. The former issue, about illumination, will be very critical.
How does NTSC resolution work? I'm wondering about dots per inch (648x488), TV lines (240 or 400), and the size of the image on an LCD display.
NTSC predates pixels, so the resolution you asked about is not addressed by the standard. The standard does specify 480 to 488 horizontal scan lines -- now equal to rows of pixels. It also addresses the video bandwidth, which with some engineering gymnastics can be related to columns of pixels. Suffice it to say, the VGA resolution of 640x480 using a single chip color image sensor is a good fit for the NTSC specification. (NTSC is a color standard; RS170 is the monochrome standard that preceded NTSC.) The size of the image on an LCD screen depends entirely on the physical size of the screen, the "dot pitch", and the display driver that might interpolate values for display on the screen. In other words, there is no correlation -- the image can be any size.
What are the specifications for a Genlocked Camera? I have a camera and I am trying to make my manufacturer Genlock this camera and they need the specifications on how to Genlock a camera.
Genlocking is a technique for analog cameras usually conforming to video standards. It is accomplished by supplying a composite sync signal that may also carry video data or may be just the sync with the video stripped off. One camera or piece of equipment is the master and generates this sync signal. Its sync signal is distributed to the other associated cameras, which phase lock to the composite sync signal. Although genlocking helps ensure that all cameras are running at exactly the same frame and line rate, it does not guarantee anything about latency between any two pieces of equipment.
I am trying to image an object immersed in water in a clear plastic enclosure at high speeds. It is quite easy to image the object through the plastic enclosure. However, when we image the object through the water in the plastic enclosure, the image is blurred. We are hoping for a solution to this. Is there a water immersion lens that we can use and where can we get it? Any alternative solutions?
Objects immersed in water have been imaged clearly; as a scuba diver I see many extraordinary pictures taken in the water. Certainly the clarity of the water is critical, but I'll assume that problem, if it existed, would be obvious to you. It's also possible for water turbulence to be an issue, but, again, I believe you would see this yourself. Is it possible that you have light reflecting off the surface of the enclosure and reaching the camera's lens? That would degrade the image.
Where can I get a large calibration grid, i.e., one on the order of 1m x 1m, for use with Cognex cameras?
As far as I know, there are none commercially available. However, from the artwork generated by the Cognex software, you can order a grid (black on clear mylar) from The PhotoPlot Store in Colorado. Tell Rich Sayer that I recommended him. I have purchased a number of targets from them, ranging from large (e.g., 1 meter square) to extremely small.
We have a Hitachi P-20B CCD camera that we are using for inspection purposes. We are using the S-video output. In one of our systems this is being converted to VGA output and then displayed on a small (~5") display. In this case we are getting a clean, crisp display without pixelation as the parts we are inspecting move through the viewing field. In another application we are trying to take the S-video output from the camera directly into a 15" LCD via the S-video input, but we are getting some pixelation issues. Do we simply need a smaller display or do we need to do something else?
The answer to your question rests with the differences in the display resolutions and whether or not the display incorporates a low-pass filter. My guess is that your 5 inch display is lower resolution than the 15 inch display and that it may include a low pass filter in the video path. In concept, you could add a low-pass filter in the video input of the 15 inch display. I'm not aware if anything like this is made, but you could search for it or try talking to someone that sells high-end surveillance equipment.
I am creating a bracket to hold the camera used by a robot in a parts picking assembly operation. Can you tell me if there is a standard camera mounting configuration/bolt pattern, or a "most common" bolt pattern in use by multiple manufacturers?
Jon -- As others have said, except for the usual single 1/4-20 threaded hole, there is no standard. The 1/4-20 thread, designed originally for tripod mounting of photographic cameras, is unsatisfactory for industrial use. The easy approach is to use an intermediate plate that can be customized for the specific camera's mounting hole configuration but has a standard mounting configuration to fit your robot end effector.
A recent machine vision application installed in one of our facilities has insufficiently accounted for ambient light in the manufacturing plant. I am looking for some generic template language or best practices for lighting that I can add to our equipment specifications to avoid this problem in the future.
Steve -- From your question, you obviously know the following, which I'll restate as an introduction to my remarks. Considering light sensed by the machine vision system, you have the engineered illumination and the ambient light. Engineered illumination creates the signal; ambient light creates the noise. The problem, then, is signal-to-noise ratio. To improve the ratio, you need to increase the signal, decrease the noise, or some of both. I don't have template language for you, as each application is unique. It's difficult to specify ambient light (e.g., intensity, direction, spectrum, etc.), and it can change for several reasons. I suggest your buy-off test incorporate a provision to compare the vision system's operation under different lighting conditions (e.g., lights off and lights on) and require that the performance not change by more than some modest delta.
I want to purchase a colored ring light for darkfield inspection of a white surface. The surface has a micro texture (not too different from sandpaper), and it is this texture that I plan to inspect. How do I go about selecting a wavelength of light to improve detection on a white surface? With white light in near darkfield I get a specular reflection from the surface.
John -- Scott is right when he said that wavelength will likely not have any effect on the specular reflections. A lot depends on the microtexture. If it's truly irregular like sandpaper, there is not going to be any illumination angle that will eliminate all glints. If it's a manufactured texture about which you know the geometry, there may be an illumination angle or range of angles that will work to keep the specular reflection out of the camera. The challenge may be to get the illumination within this angle range across the entire field-of-view. The other option Scott offered is the polarizer-analyzer approach. Using a polarizer over the light source and an analyzer (2nd polarizer) over the camera lens, you might be able to tune the analyzer position to remove the glints. However, if the texture results in multiple reflections off the texture facets, even this might not work.
Which lighting system is preferable for acquiring sheet metal images when image mosaicing is also required?
A good question. Lighting creates contrast, and, in machine vision, that is our signal. However, contrast is the difference between two things. You mention sheet metal as one of the things, but did not tell us what you are trying to contrast against the sheet metal. Also we would need to know something about the surface characteristics of the sheet metal (e.g., polished, fine grained, galvanized, etc.). For mosaicing, I'm assuming you are working on a wide area with multiple cameras (images). That imposes some constraints about achieving uniformity. So, knowledge of the area being illuminated is also necessary.
Do you know what filters will work with a Navitar 59 LGM 601 lens? I am looking to cut down on the amount of light with ND filters or Polarizing filters.
Jessica -- Do you know the filter thread for the lens? It should be on the data sheet or available from tech support at Navitar. I would expect it is a standard size. You can order the neutral density filters in threaded mounts from Midwest Optical Systems. I've used their products on a number of applications and been pleased with the quality and service.
How do I choose a lens to match my camera, in terms of focal length and lens mount?
Amy -- First -- the lens mount is determined primarily by the size of the camera's image sensor. The JIIA has developed a guide for manufacturers and users in selecting lens mounts; you should be able to find information on the AIA Vision Standards page. Second -- determine the magnification (M) you need. This is the image sensor height (or width) divided by the field-of-view height (or width). Third -- use your preferred working distance (lens to scene distance) as a good estimate of the object distance (D). Fourth -- calculate your lens's focal length (F) using the formula F = D*M/(1+M). Fifth -- find a lens with a focal length close to what you calculated and then see if you can adjust your working distance to make it work for your need.
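As a worked illustration of steps two through four, here is a short Python sketch; the example numbers are placeholders, not values from any particular application.

    # Estimate lens focal length from sensor size, field of view, and working distance (sketch).
    def focal_length_mm(sensor_mm, fov_mm, working_distance_mm):
        m = sensor_mm / fov_mm                    # magnification (step two)
        return working_distance_mm * m / (1 + m)  # F = D*M/(1+M) (step four)

    # Placeholder example: 5 mm sensor height, 100 mm field-of-view height, 300 mm working distance.
    print(focal_length_mm(5.0, 100.0, 300.0))     # ~14.3 mm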
My name is Ken Schuler. I am working with Orion Tech to see if we can find the right combination of camera/sensor/lens for a project we are working on. We had a sales rep, Kiran Devkota, but his phone disconnected. Here are some of our requirements:
Camera:
* Monochrome, decent resolution sensor.
* USB interface.
* IR capability; the unit will be operating in darkness under an IR source.
* C or CS mount, depending on the type of lens.
Lens:
* A focal point of 90mm.
* A field of vision of 35mm at the 90mm.
* A depth of field of at least 6mm.
Any help you could provide would be much appreciated. Thank you, Ken Schuler Noss4ra2@gmail.com 302.757.9544
Ken -- Your requirements are very straight forward except for "decent resolution" and USB (did you mean USB 2.0 or USB3 Vision?). I could work with you to define a camera and lens that will meet your requirements. Contact me if I can help. Perry West Automated Vision Systems, Inc. 408-267-1746
THE OPTIMAL WORKING DISTANCE
I want to inspect an electronic board with a size of 100 mm x 80 mm; therefore I would like to have a field of view of 128 mm x 102 mm. The camera that I chose has a sensor with a size of 6.784 mm x 5.427 mm (this size is calculated from the resolution and the pixel size of the camera). I want to find the optimal working distance for this camera, which can be calculated with the following formula: Focal length = Working distance * (sensor size / Field of View). If I fix the working distance at 250 mm, I get the following calculation: Focal length = 250 mm * (6.784 mm / 128 mm) = 13.25 mm. If the working distance changes, the focal length changes too. This makes me wonder if there is an optimal choice of working distance and focal length so that I can obtain better image quality with an inexpensive lens. Also, some lenses are characterized by a Minimum Operating Distance (MOD); what will happen if I take a picture from a distance smaller than the MOD? Thanks for any answer.
As a rule, if you use lens extension tubes to focus closer than the MOD, there is a risk of degradation of lens resolution -- you may experience a loss of sharpness in the image. Some lenses can be extended up to 50% of their focal length (e.g., using a 25mm extension on a 50mm lens) without serious degradation. Other lenses will be seriously degraded when used closer than the MOD. As a technical point, the focal length of a lens does not change as you change the focus, only the image distance changes. As for an optimal working distance, I'd recommend a working distance between the field-of-view diagonal and two times that diagonal. If you need to be closer due to mechanical limitations, then the first choice would be to look for a macro lens that meets your requirements. A second choice (less desirable), is to fold the optical path to fit the available space using mirrors.
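To make that rule of thumb concrete with the numbers from the question above, here is a small Python sketch (my own arithmetic, using the F = D*M/(1+M) formula from the lens-selection answer earlier in this column, not a prescription):

    # Rule-of-thumb working distance: between one and two field-of-view diagonals (sketch).
    import math

    fov_w, fov_h = 128.0, 102.0          # field of view, mm (from the question)
    sensor_w = 6.784                     # sensor width, mm (from the question)

    diag = math.hypot(fov_w, fov_h)      # ~164 mm
    wd_min, wd_max = diag, 2 * diag      # ~164 mm to ~327 mm working distance

    m = sensor_w / fov_w                 # magnification, ~0.053
    f_min = wd_min * m / (1 + m)         # ~8.2 mm focal length at the near end
    f_max = wd_max * m / (1 + m)         # ~16.5 mm focal length at the far end
    print(f"working distance {wd_min:.0f}-{wd_max:.0f} mm, focal length {f_min:.1f}-{f_max:.1f} mm")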
I am considering building a stereo system for my research project, and would like to know whether a number of GigE cameras can in practice be connected over the same Ethernet interface and send data simultaneously without significant loss of performance. I know that with FireWire cameras isochronous transfers are supposed to have guaranteed bandwidth and so on; however, I have neither direct experience with GigE cameras nor access to the standard.
The short answer to your question is mostly yes, cameras can send data simultaneously over Gigabit Ethernet. As you probably realize, the Ethernet protocol is asynchronous, with handshaking, the potential for collisions, and the ability to retransmit packets. The degree to which simultaneous performance will be degraded depends on the number of cameras connected, the image resolution of each camera, and the frame rate you are using -- in other words, the fraction of the total available bandwidth you are using. If you keep your total bandwidth utilization low, say below 30%, you should experience very few problems. If you are pushing bandwidth limits, consider a high-performance interface card with multiple ports -- each dedicated to one camera and serviced with its own on-card IP stack.
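As a rough illustration of that bandwidth budgeting, a small Python sketch follows; the camera numbers and the 10% packet-overhead allowance are assumptions, not recommendations.

    # Rough GigE bandwidth budgeting for multiple cameras sharing one link (sketch).
    LINK_BPS = 1e9                        # nominal Gigabit Ethernet line rate

    def utilization(num_cameras, width, height, bytes_per_pixel, fps, overhead=1.1):
        # overhead of ~10% is an assumed allowance for packet headers and resends
        payload_bps = num_cameras * width * height * bytes_per_pixel * fps * 8
        return payload_bps * overhead / LINK_BPS

    # Placeholder example: three 1280x1024 mono cameras at 15 frames/sec.
    print(f"{utilization(3, 1280, 1024, 1, 15):.0%} of the link")   # ~52%, well above the 30% comfort level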
The machine vision interface in our entire product line is 1394a, and we are faced with camera obsolescence and replacement. I am conducting a study of the costs and risks of changing to 1394b vs GigE or USB3.0. Changing to 1394b is the least risk and cost; however, it has been said by certain resources that FireWire support will be non-existent in the near future (2-3 yrs.). I cannot find any factual information to support this, only "hearsay". Can somebody please provide facts on the life of 1394b or point me to where I can get them? Thanks.
Michael -- From what I can tell, GigE is more popular than 1394b. Also, many manufacturers are introducing USB3 cameras, but I see few if any companies introducing new 1394b products. Clearly, the trend is away from 1394b. That support for (availability of) 1394b cameras will go away is obvious. Just when a specific camera model will be unavailable/unsupported is a matter of speculation -- even for the manufacturer. As long as there is a market for the camera, the manufacturer will continue to sell it. The more likely scenario is that the image sensor chip will become unavailable. Then, unless the market for 1394b cameras is strong, the manufacturer will direct their engineering toward more popular and profitable camera redesigns. I would recommend you give serious consideration to alternative interfaces (i.e., GigE or USB3).
What is the difference, as seen by a Bayer array camera, if any, between pure yellow and a yellow produced by 255 G, 255 R, and 0 B?
Mike -- The simple answer is there will be no detectable difference. Pure yellow will also give a response on the red and green channels and little response on the blue channel. This will be true whether you are using a single chip camera with a Bayer filter or a three chip camera. However, if you were looking for the difference between a pure green and a mixture of yellow and blue, then the responses of the color channels might be very different. RGB is a very effective color space for imaging, but a very difficult space for color recognition where color is perceived color and not wavelength components. For perceived color discrimination it is more effective to convert from RGB into the HSI or L*a*b* color spaces.
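For what it's worth, here is a minimal Python sketch of that kind of conversion, using the standard library's RGB-to-HSV routine as a stand-in for HSI or L*a*b*:

    # Convert RGB (0-255) into hue/saturation/value for perceived-color comparison (sketch).
    import colorsys

    def rgb_to_hsv(r, g, b):
        return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

    print(rgb_to_hsv(255, 255, 0))    # mixed "yellow": hue ~0.17, saturation 1.0
    print(rgb_to_hsv(200, 200, 40))   # a duller yellow: same hue, lower saturation and value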
Dear Sir or Madam: We have some projects that we want to be PC-based. According to our studies, the best software packages in this field are VisionPro, MVTec Halcon, NI Vision Builder, and the MIL software from Matrox. We need software with the specifications below: 1) Complete support of multithreading (parallel programming). 2) Connection directly to the PC via GigE, IEEE 1394, or USB, not a frame grabber. 3) Licensing that allows the software to be installed on a number of systems. 4) Tools for working on surfaces. We want to provide this solution. Is there any other software which can help us in this regard? What is your opinion, and which software do you suggest? Awaiting your kind reply. Best Regards,
Mehdi -- You have identified four of the top software packages. Any one of these is capable of covering many applications. Each of them is extensible -- if you find you need something special, you can add it. All of them offer run-time licenses. You will need to purchase a run-time license for each installation for any of these software products. If cost is critical, then CVTools is a free package. While extensive, CVTools is not as comprehensive or as efficient as any of the four software packages you named. Also, the support for this free package is much more limited.
I am looking for defect identification software in carbon steel piping. The images would be recorded on a SD card in a video camera. The camera is moved through the pipe via pressure and will travel up to 25 kms. We need the software to stabilize the video image and then find defects and estimate size. Defects will be pitting, erosion, wear.
Sean -- I may be able to help you develop this solution. Do you have a sample video? Also, do you have sample images of defects? Both those would be helpful for me to start discussing possibilities with you. Best regards.
I have an AVT GC-2450C camera. I need a simple software package to read camera setup parameters and begin grabbing frames at a specified rate and storing to disk, for perhaps a few hours until I tell it to stop. Can you recommend anything?
It sounds like AVT's Vimba viewer, available from their web site, will do what you want. If not, contact me, and I can help you with a simple program that will do what you want.
I have a Motoman robot. I need to install a vision camera to make the robot understand the parts center points. Is there any economical way to do it?
Motoman sells a vision system for their robot. I recommend you check with them to see if that vision system will work for what you need. Any other solution will cost far more for the engineering. However, be assured that your requirements can be met with a vision system.
How can I inspect an object from the inside? We have a product which needs to be checked for defects from the inside. Also, how do we check the dimensions of any object?
More information is needed, such as how the inside is currently inspected without machine vision, the size of the article, what the inspection needs to check, and so forth. Without that information, I might suggest you consider x-ray imaging. The question about how you check the dimensions of any object is far too broad. There are many imaging and image analysis techniques that can be applied to measurements. Object size, shape, and material make a difference, as do the characteristics to be measured. The measurement precision required also affects the techniques. If you have a more specific example, then it might be possible to provide you with information.
I'd like to integrate a vision system with a laser hole drilling system; the vision system will need to identify the form and size of holes on the order of 500 micron dia +/- 50 micron. Any suggestions?
James -- What suggestions are you looking for? At first glance, this doesn't appear very hard, but I'm sure as more details emerge the challenges you face will become clearer. My first suggestion, whether you integrate the system yourself or have someone else do it, is to write out a good specification. There are resources on the web that can guide you in creating the specification. If you can't find any suitable guides for this, please contact me and I may be able to help.
Hi, a question regarding getting an independent person to review an application we have for a vision system in the manufacture of nicotine patches. We would like to get someone to review the application and see if it is suitable for what we want it to do, including dealing with the natural variability in the process.
Adrian -- I consult in machine vision and, during my 30 years of consulting, frequently perform assessments like you are requesting. I should mention that I do not sell equipment. So, my assessment will be unbiased. Please contact me either by e-mail or by phone to discuss your requirements and how I can help.
Hi, I am very interested in integrating a computer vision system into my warehouse. I have spoken to a few companies and am in the initial stages of trying to see if computer vision can be integrated into my production. Without a doubt, I am sure that even if it has not reached a point where it can be integrated, the technology will get there in a few years. My main goal is to have computer vision technology be able to learn and differentiate between clothes such as t-shirts, pants, jeans, shorts, boxers, underwear, shirts, etc. when traveling down a conveyor. If you know someone who can help me I would love your referral. Also, I travel to third world countries often and am seeking to start a new venture in a foreign market. Thus, I think computer vision can offer great competitive advantages in existing businesses. I would like to know what the biggest computer vision conference is, as I would like to attend and get more information. I look forward to your response and working with computer vision technology in the very near future. Thanks.
So much depends on how the clothes are placed on the conveyor. If they are carefully placed so that a t-shirt always looks almost identical, then machine vision may have a solution. If the clothes are just randomly placed on the conveyor in a lump, then it may be extremely difficult to identify. Even if the first condition is met, there will be considerations of different sizes and different colors. Color may pose the greatest challenge -- there must be contrast between the item and the conveyor. However, the latest generation of software is getting quite good at dealing with low contrast. I would be interested in helping you scope out the requirements for your system and finding if there is a technology that can work reliably for what you need done. Please contact me to discuss this in more detail.
What research can we conduct using vision systems (machine vision) in agriculture today?
When you say agriculture, I understand this to be in the field rather than food sorting done in a plant. There are classic areas of application: automatic harvesting, field sorting (e.g., removing dirt clumps and foreign material), weeding and selective application of herbicide, and even pruning (of grape vines). Any of these tasks could be extended to new field crops that have not been addressed before. There is also the whole emerging area of smart farming: evaluating the growth and health of plants to make a map of the field for local optimization of fertilizer, pesticide, herbicide, and irrigation application. Autonomous navigation of field machinery may be another area where progress can be made; I do know that some effort has been made in that area.
What products should we use to clean the sapphire glass on our IM-6000 inspection system?
I have to agree with David. Ask the manufacturer. If it were only sapphire, then any cleaner would work. It depends, though, on whether there is a coating on the sapphire (e.g., anti-reflective coating) and what other materials are around the window. The first step, regardless of materials, is a gentle blow-off with very clean air -- canned air or a squeeze bulb are common sources (not most factory compressed air). Generally, a reagent grade acetone wipe is then followed by a cleaning wipe with reagent grade isopropyl alcohol. The right technique should be used to avoid scratching the optical surface with grit removed from the surface. There are other preparations sold by companies such as Edmund Optics that might be more suitable.
Hi, I am trying to get a quick answer about which type of light (RGB LED, IR, or something else) is suitable for pattern recognition of the following components in my assembly machine: 1. The edges of a transparent glass. 2. A gold plated metal surface. 3. An etched silicon surface (there are some rainbow colors if we use blue light). Any prompt help is highly appreciated! Wei
Unfortunately, you have fallen into the common trap of considering only the object being viewed when selecting the illumination. You need to consider contrast -- what you are trying to discriminate. For example, you are looking at a gold plated surface, but what are you trying to image? Looking for missing plating may take a different lighting scheme than looking for surface irregularities. I'm in the bay area. Why don't you call me and we can discuss this further.
I am working on a process for inspecting a 52" wide white web of fibers for any type of non-white contaminants. Ideally we'd like to detect any non-white item in the web that is 700 microns or larger. Due to upstream parts of the process, the line speed varies but is never greater than 20 yards/minute. The web is backlit and is translucent (100 mils thick). Given the relatively slow speed that we are running, is it possible that we might employ used equipment (from 4-5 years ago) to reduce cost and still have an effective system? If so, are there any integrators who would be willing to work with older equipment? Any feedback is appreciated.
There are two parts to system cost -- parts and labor. Using used equipment might save a bit on parts cost, but nothing on labor. Often, labor is the larger of the two costs. Perhaps a bigger question is what economic life you need from the system. Components become obsolete and no longer available for maintenance. Using older components shortens the expected economic life of a system. If you design with current components, you can reasonably expect 7 to 20 years of economic life. After that, the system would need to be redesigned with current hardware if the function is still required. If you use equipment that is 5 years old, you sacrifice much of the economic life of the vision system. Considering the small cost savings and the rather large loss in economic life, it rarely pays to design a machine vision system with old components.
I would like to specify the required accuracy of a vision system measurement. I historically have used the average of the absolute values of the errors being less than 1/20 of the total tolerance as a guide. Is this a good indication of the accuracy of a vision system measurement? I also use six times the standard deviation of a repeated measurement being less than 1/10 of the total tolerance as a guide for capability. Is this a good measure as well? If not, what are some good ways to verify accuracy and capability based on comparing measured data to tolerances and known measured values?
Jason -- There's a lot to say to answer your question. My response will be very brief. There are two parameters: accuracy and repeatability (aka resolution). Accuracy -- the average difference between the "true" dimension and the measured dimension. Repeatability -- the standard deviation of repeated measurements on one part. In classic QA metrology accuracy must be 1/10th the tolerance and repeatability must be 1/10th the accuracy. However, this protocol is violated with regularity. In machine vision, because we have eliminated the human factor that was considered in setting the above ratios, we typically use smaller ratios. My preferences are to use 1/5th to 1/4th as reasonable for a final ratio of repeatability to tolerance of around 1/20th. Hope this helps. Contact me if you have more questions.
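If it's useful, here is a small Python sketch of the kind of check you describe; the 1/20 and 1/10 ratios are the ones from your question, and the sample numbers are made up.

    # Gauge-style check of vision measurement accuracy and repeatability against a tolerance (sketch).
    import statistics

    def check_capability(measurements, true_value, total_tolerance):
        errors = [abs(m - true_value) for m in measurements]
        mean_abs_error = statistics.mean(errors)          # accuracy estimate
        six_sigma = 6 * statistics.stdev(measurements)    # repeatability spread
        return {
            "accuracy_ok": mean_abs_error < total_tolerance / 20,    # your 1/20 guide
            "repeatability_ok": six_sigma < total_tolerance / 10,    # your 1/10 guide
        }

    # Made-up repeated measurements of a nominally 10.00 mm feature with a 0.20 mm total tolerance.
    print(check_capability([10.01, 10.02, 10.00, 10.01, 10.02], 10.00, 0.20))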
Can these systems be used to spot faults and imperfections in textiles (for example, denim), running at speeds of 60 to 100 yds / min. ?
The short answer is yes. Textile defect inspection has been accomplished by several companies. The devil is in the details. Most critical is what defects you need to detect and to what reliability. Contact me if you want help exploring this further.
Which of the vision systems on AIA would you recommend to inspect circular aluminum bars for surface defects? The idea is to use this at the end of the line as the final visual inspection. The parts are fairly large, and as such would have to be rotated to inspect all areas of the bars.
Many of the AIA member companies supply vision systems. However, many do not like to pursue flaw detection because of its inherent difficulty. Still, you will be able to find a number of companies that can help address your need. Your biggest challenge will be to come up with a specification supported by ample samples that can be used by a vision system supplier to provide you with a system. Please let me know if I can be of further help.
What tools can I use to find the circumference of a circle?
The simplest way to find the circumference is to use a blob tool to find the area. The circumference can be directly computed from the area. Let me know if there is some reason this won't work for you.
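For reference, the relationship for an ideal circle as a short Python calculation (real blobs will deviate somewhat from this):

    # Circumference of an ideal circle from its area: A = pi*r^2, so C = 2*sqrt(pi*A) (sketch).
    import math

    def circumference_from_area(area):
        return 2.0 * math.sqrt(math.pi * area)

    print(circumference_from_area(100.0))   # ~35.4 for an area of 100 square units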
We would like a recommendation for a book or a course about machine vision. We are working on a project where the width of a steel strip is under measurement. The strip is in motion. We need to specify the correct camera, lens, and illumination. Our software is NI Vision.
Alexandre -- The most comprehensive reference on machine vision is the "Machine Vision Handbook", Bruce Batchelor, editor. This 3-volume set is practical, but extremely thorough. The information you want is in the books, but you will need to invest significant study time to get and understand it. If you want, I can work with you to identify a camera, lens, and light source that should work for your application. Perry West Automated Vision Systems, Inc. +1 408-267-1746
Hello! I'd like to know, is it possible to count the quantity of pieces in boxes with a vision system? It would be easier to explain if I could attach an image of the box as an example.
Is it possible to count pieces in a box? Often, yes, providing the shape and at least rough position of the pieces are known in advance. If the shape of the pieces in the image is not known in advance or their position in the box is not known (e.g., they are just piled into the box), then it is unlikely that this problem can be solved reliably. As Andy Long remarked, an image of pieces in a box would be necessary to give you more than mere generalities.
I am looking for a vision inspection system that could be mounted outdoors (on bridges for example) looking at the surface of a flowing water body to measure invasive plant fragments that are flowing by the camera. The measurement does not have to be absolute quantity, but some relative measurement of the "flux" of fragments in the frame. This is for use in measuring the effects of invasive plants infestations flowing from streams into lake water bodies. Do you have any recommendations for a vision inspection system?
Mike -- This is a very interesting project. I know of no commercial machine vision system that comes close to what you are looking to accomplish. I believe you will have better luck searching for a university to work with than with a commercial company.
Hello - I need to image a relatively large area (1mx1m) with about 0.5mm resolution. I think this means that the camera/lens system should be no less than about 4 megapixel, but of course higher resolution is preferred. I also need to stand the camera off the part by about 2m. The application is indoors where the lighting can be controlled reasonably well. It can be considered a static application - speed is not a concern. This is essentially a metrology problem where I am trying to measure a roughly 5mm wide x 1m long feature on the part. Are there industrial camera solutions with this type of resolution?
Glenn -- You gave half the information needed to advise you about the image resolution you need. You mentioned that the feature you want to measure is 5mm wide. You did not describe the measurement accuracy you want to achieve. Both the size of the feature and the measurement accuracy determine the image resolution needed. Would a 4 Mpixel camera work? Assume 2,000 pixels in one direction (rows or columns) across a 1 m area. That gives a spatial resolution of 0.5 mm/pixel. This is probably enough to image the 5mm wide feature. (I say enough because I don't have any information on contrast.) It also suggests you might achieve a measurement resolution as good as 0.05mm (again, with several maybe's such as contrast and the conformance of your shape to the model used for sub-pixel resolution; accuracy could range from 0.25mm to 0.5mm depending on more details). So, yes, there are cameras available. If you share more information, I might be able to help more.
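The back-of-the-envelope arithmetic above, written out as a short Python sketch; the factor of 10 for sub-pixel interpolation is the assumption behind the 0.05 mm figure.

    # Spatial and estimated measurement resolution for a camera over a fixed field of view (sketch).
    fov_mm = 1000.0              # 1 m field of view
    pixels_across = 2000         # roughly one axis of a 4 Mpixel sensor

    spatial_res = fov_mm / pixels_across              # 0.5 mm/pixel
    subpixel_factor = 10                              # assumed achievable sub-pixel interpolation
    measurement_res = spatial_res / subpixel_factor   # 0.05 mm

    print(spatial_res, measurement_res)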
Aside from a Golden Unit, what is the best way to correlate vision systems?
Jay -- your question mentions "systems" (plural). I infer from the plural that you want to know if you are getting nearly identical results from two or more nominally identical systems. A "Golden Unit" means, to me, a part that is as close to ideal as possible and should give a known result (within some tolerance band). Comparing the results from multiple systems to a Golden Unit only correlates one point on a spectrum of possible results. Yes, use a Golden Unit to start, and assuming good correlation between systems on the Golden Unit, collect a spectrum of other parts spanning the widest range of variations possible. Using those parts, look for correlation between systems. While the Golden Unit must give a prescribed result, these other parts do not need to give a prescribed result; each part gives results across the systems that agree within some tolerance.