ASK THE EXPERTS
More Answers From Werner Feith
We're trying to get a simulator for a GigE Vision camera and couldn't find a download page for one (a free trial version). Can you please help?
Mr. Azam, we have a compliant one, which is delivered in source code, but it is not free. Please contact us if interested.
The Camera Link specification does not specify the format of the data that gets transferred over the various 'Ports'. Is there another specification that should be adhered to when designing a camera with a Camera Link interface? Or is the intention that while Camera Link provides the low-level electrical interface, the actual data transferred from camera to frame grabber is not standardized? By 'format of the data', I mean pixel ordering, ordering of RGB data, etc.
Kevin, my name is Werner from Sensor to Image (www.s2i.org) and we are active in the image processing standards from AIA, EMVA and JIIA. There are some missing definitions in Camera Link which are nice-to-have or necessary in a modern camera, like a standardised control channel, pixel formats, self-describing images and some more. These missing features were some of the reasons to develop GigE Vision, USB3 Vision, CoaXPress and Camera Link HS, and from my point of view I would not start a camera development based on Camera Link any more. If we can help on the up-to-date GenICam standards mentioned above, let me know. Regards, Werner
Good morning, I plan to use the GigE Vision interface for a new infrared camera. In my design, I have a C667x DSP directly connected to an Ethernet PHY component. I have already used this interface to send UDP/IP packets. Could you tell me which hardware and software tools I need: libraries, a dongle, etc.? Could you also give me more details about the AIA subscription? Thanks for your help. Best regards
Mr. Alvin, GigE Vision is a protocol extension to UDP, and as far as I know there are no free (and certified) libraries around based on the standard definitions 1.2 or 2.0. If you are interested in commercial support to get GEV going on your TI DSP, let me know in a private mail, as Sensor to Image has both FPGA- and C-based verified implementations for GEV. Regards, Werner Feith
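GigE Vision's control side (GVCP) rides on UDP port 3956. As a minimal sketch of the device-discovery step, the code below builds and broadcasts a GVCP DISCOVERY_CMD, assuming the 8-byte GVCP command header layout (key 0x42, flags, command, length, request id). This is illustrative only, not a certified implementation:

```python
import socket
import struct

GVCP_PORT = 3956          # well-known GVCP UDP port
DISCOVERY_CMD = 0x0002    # DISCOVERY_CMD command value
GVCP_KEY = 0x42           # magic key byte of every GVCP command

def build_discovery_cmd(req_id=1):
    """Build an 8-byte GVCP DISCOVERY_CMD header (no payload)."""
    flags = 0x01          # bit 0: acknowledge required
    payload_len = 0
    return struct.pack(">BBHHH", GVCP_KEY, flags, DISCOVERY_CMD,
                       payload_len, req_id)

def discover(timeout=1.0):
    """Broadcast a discovery command and collect raw DISCOVERY_ACK replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(build_discovery_cmd(), ("255.255.255.255", GVCP_PORT))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(1024)
            replies.append((addr, data))
    except socket.timeout:
        pass
    return replies
```

Each reply would carry a DISCOVERY_ACK payload with the device's MAC, IP and manufacturer strings; parsing those fields, and register read/write (READREG/WRITEREG), is where a certified library earns its keep.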
My question is regarding GigE Vision over 10 GigE. I saw cameras with GigE Vision over 10 GigE. Is this standard the same as the already published "GigE Vision v2.0" by AIA, with the difference that instead of using GigE it uses 10 GigE to achieve higher speed, or is it another new standard? In other words, what are the differences between the "GigE Vision v2.0" standard and the new GigE Vision over 10 GigE? Many thanks for your kind help and support.
Omar, "GigE Vision v2.0" allows any Ethernet transport medium, so you can run GEV2 on anything from 10 Mbit BNC up to 20 Gbit link-aggregated fiber (or so). This was possible with GEV1.x as well, but not really fully legal from a specification point of view. The main difference between GEV1.x and GEV2.x, among many new features, is the different size of the image and block headers, so make sure which GEV standard is supported by both camera and application, as e.g. a GEV1.x-only camera will not work with a GEV2.x-only application. Regards, Werner
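The header-size difference can be illustrated with a sketch of a GVSP 1.x stream-packet parser. This assumes the 8-byte 1.x header layout (16-bit status, 16-bit block id, 8-bit packet format, 24-bit packet id); GEV 2.x extended-ID streams widen the block and packet ids, so a parser like this would misread them, which is exactly the 1.x/2.x incompatibility described above:

```python
import struct

# GVSP 1.x stream-packet header, 8 bytes, big-endian:
#   status(16) | block_id(16) | packet_format(8) | packet_id(24)
PACKET_LEADER, PACKET_TRAILER, PACKET_PAYLOAD = 1, 2, 3

def parse_gvsp1_header(data: bytes):
    """Split the first 8 bytes of a GVSP 1.x packet into its fields."""
    status, block_id = struct.unpack(">HH", data[:4])
    word, = struct.unpack(">I", data[4:8])
    packet_format = (word >> 24) & 0xFF   # leader / trailer / payload
    packet_id = word & 0x00FFFFFF         # 24-bit id within the block
    return status, block_id, packet_format, packet_id
```

A receiver that supports both versions has to negotiate (or detect) the mode first and switch header parsers accordingly.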
I am experimenting with a GigE camera and Aravis, and I am coming across a few bugs in the system with one specific GigE cam. Is anybody else using GigE cams with Aravis?
Hello Jemshid, I think John is right, and if you ask Google about Aravis you will see that the project is not really stable. There are several vendors of GEV libraries around who constantly test their products. So if you need help, let me know. Werner
I am creating a new GigE Vision compliant camera and am defining all of the registers. Registers will be continually defined, added, changed, etc. When the register definitions change, I want to be able to auto-generate the XML against the GenICam schema. Is there a recommended way to do this? MS Excel :-/ ?? Ideally I'd like to be able to version-control this register set wherever or however it is stored.
Steve, I think you hit an important point, and my experience as a GEV/U3V/CXP IP vendor shows there is no standard way to plan and auto-generate XML files. But one note on "When the register definitions change": do not really change registers, as the XML is the central point of documentation of what your camera can/cannot do, so keeping the XML stable over all/most/... versions will help a lot in all aspects of your DEVICE design. And one question back to you: can I present your question at the next GenICam meeting, taking place in early October, as I think the existing group is not really aware of this problem for newcomers? Sorry, Werner
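One workable pattern, sketched below under the assumption of a simple in-repo register table: keep the registers in a plain, diff-friendly text form under version control (a CSV works the same way as the tuple list used here) and generate GenApi-style `<IntReg>` nodes from it. The element names follow the GenICam GenApi schema, but the generator itself is illustrative; validate the output against the official XSD:

```python
import xml.etree.ElementTree as ET

REGISTERS = [
    # (name, address, length_bytes, access) -- the version-controlled table
    ("Width",  0x0100, 4, "RO"),
    ("Height", 0x0104, 4, "RO"),
    ("Gain",   0x0200, 4, "RW"),
]

def registers_to_xml(registers, port="Device"):
    """Emit GenApi-style IntReg nodes for each row of the register table."""
    root = ET.Element("RegisterDescription")
    for name, addr, length, access in registers:
        reg = ET.SubElement(root, "IntReg", Name=name)
        ET.SubElement(reg, "Address").text = "0x%X" % addr
        ET.SubElement(reg, "Length").text = str(length)
        ET.SubElement(reg, "AccessMode").text = access
        ET.SubElement(reg, "pPort").text = port
    return ET.tostring(root, encoding="unicode")
```

Because both the table and the generated XML live in the same repository, a register change shows up as a small readable diff in both, which also helps enforce the "keep the XML stable" advice above.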
Could you help to provide GigE Vision "test lab" information in Taiwan? Company name, address, test fee, etc. We want to make a GigE Vision camera device. We want to know how to pass the standard test. Thanks.
Angel, if you register your camera you will get the correct validation framework from AIA; this is software to be used first in your lab and then later on in a public plugfest. If you buy a certified IP core from us (Sensor to Image) -> https://www.visiononline.org/product-catalog-detail.cfm/productid/2964 we can help with tools/support/tests as well. Regards, Werner
Hello, I was looking through a lot of GigE Vision compliant cameras and I could not find one camera that supports All-In packets. Could you point me to a manufacturer that does?
Lucas, are you looking for an application, a camera=DEVICE, or (host-)software=HOST around the all-in packet? Sorry to ask instead of presenting a simple answer. Werner
I'm developing an application for line-scan acquisition with a Xilinx ML605. I developed an IP that manages the LVDS signals correctly. I can read the clock output from the line-scan camera and I can deserialize the acquired data using a PLL. But I have a problem with the correct deserialization of the data. I obtained the latest update of the Camera Link standard but I cannot find the positions of the data bits for deserialization. The problem is identifying the bits of the pixels on the data lines. Thank you for your attention.
Maria, look at XAPP485 from XILINX. If that does not help give me a mail as we have IP around this, which we have running on Spartan3 and Spartan6. Regards, Werner Feith ------------------------------------------------- Sensor to Image GmbH, Werner Feith Lechtorstrasse 20, D - 86956 Schongau Email : email@example.com -------------------------------------------------
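For orientation, the 7:1 gearbox at the heart of the problem can be sketched in a few lines: per pixel clock, each of the four Camera Link (Base) LVDS data pairs carries 7 serial bits, which reassemble into a 28-bit Channel Link word. Which of those 28 bits maps to which port bit and control signal (FVAL/LVAL/DVAL/Spare) is defined in the Camera Link specification tables and the DS90CR287/288 datasheets; the slot ordering used below is an assumption for illustration only:

```python
def deserialize_word(samples):
    """Reassemble one 28-bit Channel Link word.

    samples: 7 tuples of 4 bits, one tuple per serial clock slot,
    ordered (X0, X1, X2, X3). Bit placement: pair X0's first sampled
    bit lands at word bit 0 (an illustrative ordering, not the spec's).
    """
    word = 0
    for slot, pair_bits in enumerate(samples):   # 7 serial slots per pixel clock
        for pair, bit in enumerate(pair_bits):   # 4 LVDS data pairs
            word |= (bit & 1) << (pair * 7 + slot)
    return word
```

Once the 28-bit word is stable, the remaining step is a pure lookup: apply the spec's bit-assignment table to pick out Port A/B/C pixel bits and the control signals, which is the part the question is really about.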
Hi, I have been using a PCI framegrabber card for video acquisition and processing the video for real-time object recognition. Now we are planning to shift to an external framegrabber, the Pleora iPORT. This captures video from an analog video source and converts it to GigE, which in turn is connected to the PC via a network switch, and we run object recognition algorithms on the video. Is this a good option for real-time video acquisition and processing for pattern recognition tasks? Are there any frame drops in this setup? Does GigE Vision software handle this kind of data and provide processing libraries for object recognition tasks? Please advise. Thanks and regards.
Emmanuel, John is right, with two additions: - the pure PCI bandwidth is a little higher than the GigE one, so if you are running at the edge of PCI today you will have more trouble on GigE - we have built similar systems, so it may be interesting to see a second offer. If more information is needed, let me know.
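The bandwidth point can be made concrete with rough numbers: classic 32-bit/33 MHz PCI tops out at about 132 MB/s theoretical, while Gigabit Ethernet delivers about 119 MB/s of GVSP image payload at a standard 1500-byte MTU (assumed per-packet overheads: 20 B IP + 8 B UDP + 8 B GVSP inside the frame, plus 38 B Ethernet framing/preamble/gap on the wire):

```python
# Theoretical PCI bandwidth: 32-bit bus at 33 MHz.
pci_mb_s = 33e6 * 4 / 1e6                    # -> 132 MB/s

# Effective GVSP payload rate on Gigabit Ethernet at a 1500-byte MTU.
mtu = 1500
payload = mtu - 20 - 8 - 8                   # image bytes per packet
wire = mtu + 38                              # bytes on the wire per packet
gige_mb_s = 125.0 * payload / wire           # 1 Gbit/s = 125 MB/s raw

print(round(pci_mb_s), round(gige_mb_s, 1))  # ~132 vs ~119 MB/s
```

Jumbo frames (9K packets) shrink the per-packet overhead and close most of that gap, but a system already saturating PCI has no headroom left on plain GigE either way.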
I have an application with a GigE Vision camera and I need to convert the output into HD-SDI for input to a Vitek MGW PicoTough which compresses the data to H264 Ethernet packets. Any ideas for convertors would be much appreciated.
Mr. Goddard, hello from Sensor to Image. I cannot give you a ready-made product for your application, but we have some LEGO stones around to build your fortress=application. The main approach would be a software-based one on GPU modules like this: https://www.toradex.com/de/computer-on-modules/apalis-arm-family/nvidia-tegra-k1 where GEV reception, H264 encoding and sending would all be done in C. Possible for your application? Regards, Werner Feith
We have a lens and a motor with image stabilisation features. I want to develop a system for a mobile application. If I choose an image sensor (e.g. from Sony), is it always mandatory to have a frame grabber, or can I opt for GigE Vision? What are the basics of selecting an image sensor or frame grabber?
Rose, the image sensor selection and the (application software) image acquisition selection are largely unrelated, except for the bandwidths of image generation and image processing, which should be similar. I know my answer might be a bit fuzzy, but I think you are seeking architecture advice and/or selection, where a precise answer is difficult for me right now. So please start with the right sensor, and here the aspects of light sensitivity, sensor read-out speed, ... matter most, to get the best image quality for your development, because the whole development might fail if you start at the wrong end. And yes, that setup might be expensive, but stripping an expensive working setup down to a cost-optimized one during development will always be cheaper than a failed development due to the wrong setup. Good luck, Werner
I have an application in which I would like to tap into the transmission of video from an automotive controller to an LCD screen to ensure the data displayed on the screen matches the stored image. This will not include a camera but will require capture of the data from the controller. Has anyone developed an application similar to this? My counterparts in other areas of my company have done this, but with specialized equipment designed to work with their video chips. I want to accomplish this with standard HDMI, DVI, and even VGA transmissions.
Jeff, looking at your question I started to look around and finally found a link: https://www.digiteqautomotive.com/en/node/30 to an automotive product, not part of the industrial imaging world, which might help. So maybe take this as some inspiration, as years back we developed products similar to the one you request and the one described in the link, but they never made it to market as standard products. The quantity is too low at this point, and the I/O standards (electrical, speed, protocol, ...) are just too many. Regards, Werner
I manage an engineering group at Eastman Kodak company responsible for our line of high speed scanner equipment. We are looking at the potential for certain aspects of our technology to be a good fit for opportunities within the machine vision industry. High speed capture and image processing are key elements of our scanner industry as well as low cost manufacturing. We are looking to arrange a discussion with industry experts to determine what may make sense in this area. We feel we may be able to bring the industry potentially new capabilities and would welcome further discussion.
Robert, we had some customers in the area of large scanners (A1 and A0) who were looking for line-scan sensors, but they shelved the projects because no good sensors were available. Could I have some preliminary datasheets of the sensors you are talking about? Regards, Werner
Hello, I am looking for the UDP specifications of the GVSP and GVCP protocols. How can I download these specifications? Regards, Stephane.
Mr. Barillet, the GigE Vision specification is available for free from the AIA at: http://www.visiononline.org/form.cfm?form_id=701 But if you are looking for a core solution for Altera or Xilinx, please contact me, as I can help.
Just wondering about the status of using IEEE 1588/PTP for synchronization in vision systems.
Mr. Pfitscher, PTP is implemented in several products and working. Have a look at the standard booth at VISION show 2014 in Stuttgart, as the TC just planned a PTP based setup for the show. Greetings from the GEV standard meeting in Japan, Werner Feith
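For reference, the clock math behind a PTP synchronization step is small enough to sketch. This assumes the standard IEEE 1588 delay request-response exchange with its four timestamps:

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """One IEEE 1588 Sync/Delay_Req exchange, assuming a symmetric path.

    t1: Sync sent (master clock)      t2: Sync received (slave clock)
    t3: Delay_Req sent (slave clock)  t4: Delay_Req received (master clock)
    Returns (slave clock offset from master, mean one-way path delay).
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay
```

In a GEV2.x system the camera's servo applies this offset continuously, which is what makes features like scheduled action commands and cross-camera timestamp comparison possible.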
Hi, we are working with a somewhat complex system with 8 GigE Allied Vision cameras running in 12-bit PixelFormat mode, connected to a switch with a 10 Gig uplink to a virtual machine (ESXi) on which our software grabs the images. All cameras are only used to take single frames, no video. At least 4 cameras are triggered by the same hardware trigger to grab a frame of an object; in the worst case even 8 cameras may grab at the same time.

Lately we have been facing problems with incomplete frames. We only have one try to grab the frame, since the object will be moving away. We tried different parameters like a PacketSize of 9K and Receive Buffers in the hardware adapters and virtual network adapters, but this did not fix the problem for us. The switch itself does not report any errors and the network traffic does not seem to be at any limit. It seems like the VM is having some trouble here. Using Wireshark to look at incoming GVSP packets and outgoing GVCP packets indicates that packets are missing on the receiving side and are requested for retransmission, but only up to some limit. We can see that for ~200 ms not a single GVSP packet is received, like a short 'hiccup'.

We want to make this as robust as possible, since we have up to 10 seconds before the next object triggers a new grab, so we have plenty of time to wait for the frames if the SDK allows it. From my understanding, several GVSP parameters exist that could be tuned that way, but they are either not implemented in the SDK or some internal timeouts make it impossible. Has anyone experience with such a system and the GVSP protocol? Side note: we have set the camera bandwidths down to 10 Mb/s, but this does not seem to solve the problem. It looks like the problem lies in triggering multiple cameras at exactly the same time. Best regards
Mr. Schardt, I think this needs some local debugging, as you have many bottlenecks in your system which are NOT related to any GEV operations. The good thing is that the GenICam crowd is quite near your office next week: there is a plugfest on Friday, 11 November, in Lüttich, where some people might be interested in debugging your setup. So if you are willing to take it to Lüttich, I am willing to try to bring you into the room. Regards, Werner Feith
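As a debugging aid, the receiver-side bookkeeping behind GVSP retransmission can be sketched as follows. This is illustrative only: it computes which packet-id gaps a PACKETRESEND_CMD would have to request for one block, while real SDKs additionally bound such requests by their resend-window and timeout parameters (the "limit" visible in the Wireshark trace above):

```python
def missing_packet_ranges(received_ids, last_id):
    """Return (first, last) packet-id ranges absent from received_ids.

    received_ids: iterable of GVSP packet ids seen for one block.
    last_id: highest expected packet id (from the trailer / leader info).
    """
    seen = set(received_ids)
    ranges, start = [], None
    for pid in range(last_id + 1):
        if pid not in seen:
            if start is None:
                start = pid          # open a new gap
        elif start is not None:
            ranges.append((start, pid - 1))   # close the current gap
            start = None
    if start is not None:
        ranges.append((start, last_id))       # gap runs to the end
    return ranges
```

Logging these gap lists per camera and per trigger would show quickly whether the ~200 ms hiccups hit all simultaneously triggered cameras at once (pointing at the VM/NIC) or only one stream (pointing at the switch or camera).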