ASK THE EXPERTS
More Answers From Brian Durand
I need a custom two-camera setup hooked together, and more. Who can help with that?
Hi Randal, this sounds like something we can help you with, though we need to learn the specifics. Feel free to contact me at the email or phone number below.
Is there a light combination that can take pictures of a light brown plastic bottle with water droplets, to detect whether the cap is on straight or crooked? The cap is just slightly darker than the bottle, and the water droplets are causing problems. Would a blue light help, or an IR backlight?
If space allows, a back light would make it easier. An IR wavelength, say 850 nm, may or may not help "see through" the water. Be sure your camera is sensitive enough in the near IR range, with any internal IR filter removed. Shorter wavelengths (closer to 800 than 900 nm) transmit better through typical lenses. You might also find that a polarizer/analyzer combination helps.
I have 2 applications where I need very uniform lighting across a large area (one is 52" x 24", the other is 62" x 24", though other designs are possible). There are 2 cameras in the middle of this. I've reached out to several manufacturers/distributors for a light this large with the specified ~25,000 lux at 0.5 meters, but have been unable to get a quote. Does anyone know of a manufacturer who can do this?
Hi Matt, We could help you with this. We've recently worked on similar projects delivering up to 100,000 lux over large areas. Please contact me directly for info.
THE OPTIMAL WORKING DISTANCE I want to inspect an electronic board measuring 100 mm x 80 mm, so I would like a field of view of 128 mm x 102 mm. The camera I chose has a sensor measuring 6.784 mm x 5.427 mm (calculated from the camera's resolution and pixel size). I want to find the optimal working distance for this camera, which can be calculated with the following formula: Focal length = Working distance * (sensor size / field of view). If I fix the working distance at 250 mm, I get: Focal length = 250 mm * (6.784 mm / 128 mm) = 13.25 mm. If the working distance changes, the focal length changes too. This makes me wonder: is there an optimal choice of working distance and focal length that gives better image quality with an inexpensive lens? Some lenses specify a Minimum Operating Distance (MOD); what happens if I take a picture from a distance smaller than the MOD? Thanks for any answer.
Hello, So long as the working distance is greater than the MOD, you'll be fine. Choose the working distance that is convenient for your installation. That said, you'll notice that lenses having a shorter focal length will produce some "fish-eye" distortion: the size/shape of an object will change as it moves from the center of the image toward a corner. If you're measuring components on the electronic boards, consider a telecentric lens. You'll find that the lens aperture has an effect on image quality. Most machine vision lenses perform best when set around f/4, assuming there is sufficient light. Set the camera exposure time accordingly. Be careful to choose a lens designed for the size of the pixels in your camera. Megapixel ratings don't really mean much, as there is no standard. You may find this page helpful. It answers common questions and links to several lens calculators: https://machinevisionstore.com/tech/lenses Good luck with your project.
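The formula from the question can be turned into a quick sketch for comparing candidate working distances (the numbers are taken from the question above; the helper name is just for illustration):

```python
def focal_length_mm(working_distance_mm, sensor_mm, fov_mm):
    """Thin-lens approximation: f = WD * (sensor size / field of view)."""
    return working_distance_mm * sensor_mm / fov_mm

# Sensor width 6.784 mm, desired FOV 128 mm, as in the question.
for wd in (150, 250, 400):
    f = focal_length_mm(wd, 6.784, 128)
    print(f"WD {wd} mm -> focal length {f:.2f} mm")
```

In practice you would pick the nearest standard focal length (for example 8, 12.5, 16, or 25 mm) and then recompute the working distance to suit it.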
Hello, a question about telecentric lenses: can I measure a hole 8 mm in diameter, at a working distance that varies from 90 mm to 130 mm, with a telecentric lens that has a 16 mm FOV but an object depth (depth of field) of only 2 mm? To increase the object depth it is necessary to increase the FOV; however, if the FOV increases, the resolution also has to increase.
Hi Daniel, To confirm I understand your question correctly... Your working distance (measured from the front of the lens to the object) varies from 90 to 130 mm. The hole in the object is 2 mm deep. The necessary FOV is 16 mm. We really need to know the camera sensor size to determine the necessary magnification. Then, given the pixel size and lens F#, we could calculate the depth of field needed to accommodate the changing working distance. As an example, consider a camera having the Python 5000 sensor. The sensor has 2592 x 2048 pixels, each 4.80 µm in size. The ideal magnification would be 0.614 for a 20.2 x 16.0 mm FOV. One popular lens manufacturer offers a 1" lens with 0.767 magnification. Combined with the image sensor, you get a 16.2 x 12.8 mm FOV. With a fixed working F# of 16, the lens has a DOF of just 2 mm. Bottom line (assuming I understood your requirements) is that no conventional lens can deliver the huge DOF you need. Consider moving the lens as needed.
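As a rough sketch of the arithmetic in this reply (sensor and magnification numbers as quoted above; the function name is illustrative):

```python
def fov_mm(pixels, pixel_size_um, magnification):
    """Field of view = sensor dimension / magnification."""
    sensor_mm = pixels * pixel_size_um / 1000.0
    return sensor_mm / magnification

# Python 5000 sensor: 2592 x 2048 pixels, 4.80 um each,
# behind the 0.767x magnification lens mentioned in the reply.
width = fov_mm(2592, 4.80, 0.767)   # about 16.2 mm
height = fov_mm(2048, 4.80, 0.767)  # about 12.8 mm
```

The same relationship, run in reverse, gives the 0.614 ideal magnification for a 20.2 mm wide FOV.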
I need to develop a Windows-based device driver for a GigE product. Is there any sample Windows device driver source code available for reference?
I'm going to assume that by "GigE product" you mean a GigE Vision compliant camera. As John mentioned, Pleora is a fine option. Other suppliers have also wrapped the low-level specification with their own higher-level API. Some, such as Matrox, Cognex, National Instruments and MVTec, license (almost) universal drivers for a fee. Another good option is the SDKs offered by most camera manufacturers. They tend to be free, but only work with that manufacturer's cameras. Basler Pylon is an example.
Hello, I am looking for a way to photo-ID different anode blocks. In identifying the blocks, I also need a way to quality-control them (detect cracking, discoloration, and spalling). I am not very well versed in this field, so any advice would be greatly appreciated.
The best advice I can give you is to partner with a company that specializes in integrating machine vision. Many of these companies do only machine vision, so they don't compete with more general control system integrators and machine builders. This approach will minimize the financial and technical risks, enabling you to deliver a great solution on time. You can look over the specialist's shoulder as they develop the solution, learning about the technology for your next project. The AIA has a list of Certified System Integrators: http://www.visiononline.org/mvo-content.cfm/machine-vision/AIA-Certified-System-Integrator-Program/id/187. Note the list includes our company, i4 Solutions. http://www.i4solutions.us
Hello Vision Experts, I am looking for a very professional System Integrator for visual inspection of integrated circuits. Which System Integrator in the US do you recommend? Thank you in advance.
Hi David, Thanks for posting to the experts! I'd suggest talking to a couple of the AIA Certified System Integrators. You can be sure these companies have the expertise to succeed on your project. The list of certified integrators is in the right column on the page below: http://www.visiononline.org/mvo-content.cfm/machine-vision/AIA-Certified-System-Integrator-Program/id/187 Send me a private message if you'd like to have a brief discussion to see if your project is a fit for us.
I currently have an outdoor vision system that detects logs on a stepfeeder. I am using a Cognex In-Sight 7050 with a 3.5 mm focal length lens to provide a FOV of around 3 meters. This works well, except that sunlight causes glare and shadows that make the camera either false detect or not detect at all. Could anyone recommend lighting/lenses/filters that may help?
You can try a polarizing filter to reduce glare. A hood around the lens might also help, though that may be hard to accomplish given your lens's very wide field of view. I assume you're using the sun as your source of illumination, so you have little choice but to accept what it gives you. The alternative would be blocking the sun (a roof?) and installing your own LED lighting that you can control. The geometry (direction) and wavelength of the light will be important to optimizing contrast. If I understand your brief description, light coming from behind the log may be a good solution. Also, a camera having a better dynamic range would be more forgiving of the bright and dark areas.
I've been working with laser profiling sensors to generate 3D images and inspecting them with vision tools. Usually these devices only give you what is called a height image. Using this image I can measure almost anything I want but I cannot read a 2D code printed on an object or inspect the color/grayscale value of the same object. Is there a device that can deliver both Height Image and "normal image" (similar to one from a camera) or should I just use a profile sensor and camera at the same time?
You might consider grabbing an additional image with the laser turned off and an appropriate 2D light turned on, in order to view the 2D code. Or, your friends at Sick make the Ranger camera that can grab both 2D and height info.
I am working on an outdoor system for detecting fast-moving objects. I need an inexpensive solution - 4 cameras shooting @ ~40 fps, 2 MP, GigE, frame-level synchronization - in this case minimum 25 ms accuracy (1/40). I need roughly about 100 degrees horizontal viewing angle per camera. The idea is to detect the objects and recreate their positions in X-Y-Z. Does anyone have any idea about the hardware setup? It's really tricky to find the right combination of sensor and optics to achieve the desired FOV at a reasonable price (camera + housing + lens should not be more than 500-600 USD). Another thing is the synchronization: I want it to be as simple as possible, no additional trigger boxes and cables, so the sync must be done via PTP. The problem is that not many cameras support PTP, which makes the choice even harder, or impossible.
Hi Ivan, thanks for posting. Here is some info I think you'll find helpful. Basler has a white paper about synchronous capture using their cameras: http://s.baslerweb.com/media/documents/BAS1601_White_Paper_Multi_Camera_applications_EN.pdf I recommend the new Basler acA1920-40gm camera. Great for outdoors, given 73 dB dynamic range and 70% quantum efficiency. Great value, and not too far from your goal. Note the dynamic range comes in part from the camera's large pixels, which require a higher cost 1" lens unless you can reduce resolution (AOI) to, say, 1440 x 1200. See https://machinevisionstore.com/Catalog/Details/1540. Regarding field of view, there is a lens calculator for this specific camera here: https://machinevisionstore.com/design/EntrocentricCalculator?cameraProductId=1540 Good luck with your project.
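Once the cameras are PTP-synchronized, frames from different cameras still have to be matched up in software by timestamp. A minimal sketch, assuming each camera delivers a sorted list of (timestamp, frame) pairs and using half the 25 ms frame period as the matching tolerance (the function and data layout are illustrative, not from any camera SDK):

```python
import bisect

def group_frames(streams, tolerance_s=0.0125):
    """Match one frame per camera by PTP timestamp.

    streams: dict mapping camera id -> sorted list of (timestamp, frame).
    Returns a list of dicts {camera_id: (timestamp, frame)} containing
    only complete groups, where every camera's timestamp falls within
    tolerance_s of the reference camera's timestamp.
    """
    cams = sorted(streams)
    ref, others = cams[0], cams[1:]
    groups = []
    for t_ref, frame_ref in streams[ref]:
        group = {ref: (t_ref, frame_ref)}
        for cam in others:
            stamps = [t for t, _ in streams[cam]]
            i = bisect.bisect_left(stamps, t_ref)
            # Nearest neighbor is one of stamps[i-1] and stamps[i].
            candidates = [j for j in (i - 1, i) if 0 <= j < len(stamps)]
            if not candidates:
                break
            j = min(candidates, key=lambda k: abs(stamps[k] - t_ref))
            if abs(stamps[j] - t_ref) > tolerance_s:
                break  # no frame close enough; drop the whole group
            group[cam] = streams[cam][j]
        else:
            groups.append(group)
    return groups
```

For example, with two cameras where one dropped a frame, only the complete timestamp-aligned group survives, which is usually what you want for X-Y-Z reconstruction.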
Hi, we are working with a fairly complex system: 8 GigE Allied Vision cameras running in 12-bit pixel format mode, connected to a switch with a 10 Gig uplink to a virtual machine (ESXi) on which our software grabs the images. All cameras are only used to take single frames, no video. At least 4 cameras are triggered by the same hardware trigger to grab a frame of an object; in the worst case even 8 cameras may grab at the same time. Lately we have been facing problems with incomplete frames. We only have one try to grab the frame, since the object will be moving away. We tried different parameters, like a packet size of 9K and the receive buffers in the hardware and virtual network adapters, but this did not fix the problem. The switch itself does not report any errors, and the network traffic does not seem to be near any limit. It seems like the VM is having some trouble here. Using Wireshark to look at incoming GVSP packets and outgoing GVCP packets indicates that packets are missing on the receiving side and are requested for retransmission, but only up to some limit. We can see that for ~200 ms not a single GVSP packet is received, like a short 'hiccup'. We want to make this as robust as possible; we have up to 10 seconds before the next object triggers a new grab, so there is plenty of time to wait for the frames if the SDK allows it. From my understanding, several GVSP parameters could be tuned for this, but they are either not implemented in the SDK or internal timeouts make it impossible. Does anyone have experience with such a system and the GVSP protocol? Side note: we have set the camera bandwidths down to 10 Mb/s, but this does not seem to solve the problem. It looks like the problem lies in triggering multiple cameras at exactly the same time. Best regards
Although I'm not familiar with the AVT drivers, many GigE drivers have parameters intended to solve this problem. Specifically, camera parameters include Frame Transmission Delay, Inter-Packet Delay, and Bandwidth Reserve. Set the Frame Transmission Delay to different values for each camera so they don't all send at once. Assuming you don't have those parameters available... It isn't clear to me whether the problem is in the switch or the VM. If the switch, you might think about a model having a chip dedicated to each port. If the VM, well, I wish you luck.
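The staggering idea reduces to simple arithmetic: give camera i a transmission delay of i "slots", where one slot is the time a full frame needs on the shared link. A sketch (the function name and the assumption of microsecond units are illustrative; real drivers express the delay feature in their own units, often timestamp ticks):

```python
def staggered_delays_us(num_cameras, frame_bytes, link_mbps=1000, overhead=1.10):
    """Suggest per-camera frame transmission delays (in microseconds)
    so cameras sharing one uplink send sequentially instead of all at
    once. One slot = time to push one frame through the link, padded
    by a 10% overhead factor for packet headers and gaps."""
    frame_us = frame_bytes * 8 / link_mbps  # bits / (Mbit/s) -> microseconds
    slot_us = frame_us * overhead
    return [round(i * slot_us) for i in range(num_cameras)]

# Hypothetical example: 4 cameras, ~3.46 MB per 12-bit frame, gigabit links.
delays = staggered_delays_us(4, 3_456_000)
```

Note the trade-off: the last camera's frame arrives num_cameras - 1 slots late, so this only works when the time between triggers exceeds the total transmission window.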
Hello - I need to image a relatively large area (1 m x 1 m) with about 0.5 mm resolution. I think this means that the camera/lens system should be no less than about 4 megapixels, but of course higher resolution is preferred. I also need to stand the camera off the part by about 2 m. The application is indoors, where the lighting can be controlled reasonably well. It can be considered a static application - speed is not a concern. This is essentially a metrology problem where I am trying to measure a roughly 5 mm wide x 1 m long feature on the part. Are there industrial camera solutions with this type of resolution?
Hi Glenn, This reply might be coming a bit late, but nonetheless... Yes, industrial cameras having 2 - 5 MP are pretty typical these days. I included a link to one such camera below. Given this camera's CMV4000 image sensor, a lens having a 25 mm focal length would image 1 x 1 meter from a distance of about 2.2 meters. Depending on the optical characteristics of your object, the 0.5 mm/pixel resolution may, or may not, suffice. https://machinevisionstore.com/catalog/details/722
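A quick sanity check of these numbers (assuming the CMV4000's 2048 x 2048 resolution and 5.5 µm pixel size, which are not stated above; the function names are illustrative):

```python
def working_distance_mm(focal_mm, fov_mm, sensor_mm):
    """Thin-lens approximation: WD = f * (field of view / sensor size)."""
    return focal_mm * fov_mm / sensor_mm

def mm_per_pixel(fov_mm, pixels):
    return fov_mm / pixels

sensor = 2048 * 5.5 / 1000                 # 2048 px of 5.5 um -> 11.26 mm
wd = working_distance_mm(25, 1000, sensor)  # roughly 2.2 m, as in the reply
res = mm_per_pixel(1000, 2048)              # just under 0.5 mm/pixel
```

So the 4 MP estimate in the question is right at the limit: 1000 mm over 2048 pixels gives about 0.49 mm per pixel, which may or may not be enough margin for the 5 mm feature.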
Can you tell me how to eliminate the effect of changing ambient light on my camera's image acquisition? I have a B&W Cognex camera with an integrated red LED light, and I want to find the area of a simple black blob, but the ambient light is changing and my blob's brightness changes with it.
If the ambient light is sunlight, there isn't much you can do other than to block it. Otherwise, mount a red bandpass filter on the lens. You'll want the filter to match the wavelength of the light. For example, if the LED outputs 660 nm light, use a 660 nm bandpass filter. This will block other colors of light, hopefully enabling the camera's light to overwhelm the ambient light. If that doesn't work well, you can replace the camera's light with one more powerful, and combine that with a bandpass filter.
I am a concrete construction quality professional active in many industry associations, but I know very little about image capture and analysis. I am looking for a combination of hardware and software to measure voids in concrete surfaces after the formwork is removed. Our industry currently has only subjective measurement of surface to void ratios and color uniformity; both important aesthetic attributes. I would like to build a device to capture images under controlled lighting and reflectance, and then use software to determine void / surface ratio and color uniformity. Several white papers have been written on this subject using Matlab or ImageJ for analysis of greyscale images, but the devil appears to be in the details - controlling variables.
Hi John, that sounds interesting. We develop machine vision solutions, and have previous experience inspecting concrete blocks. I know that's different, but perhaps not too different. I'd be happy to discuss this with you in detail.
I am looking for a low cost solution to sort golf balls by make and model. We intend to use OCR as part of this solution, since most ball manufacturers use alphanumeric model numbers and logos. The challenge is getting the right orientation of the text and logo for reading. Is it possible to read in multiple orientations?
Hi Chris, The short answer is "yes." Some OCR algorithms can handle random orientation. However, you will likely need to warp the image to correct for the curved surface of each ball. And certainly you will need many views of each ball to get the necessary information. Let me know if we can help.