Camera and Image Sensor Technology Fundamentals - Part 1

Part of the AIA Certified Vision Professional-Basic program, Steve Kinney, Director of Technical Pre-Sales and Support at JAI, Inc., teaches the fundamentals of camera and image sensor technology. You'll gain an understanding of camera design including CCD and CMOS sensor technology.

I'm Steve Kinney, Director of Technical Pre-Sales and Support at JAI, Inc., and I'm also the AIA Camera Link Committee Chairman. I'm an electrical engineer who began my career in the United States Air Force as an avionics inertial navigation specialist. Beginning in 1990 I spent several years in Silicon Valley in product design and development, and I began my career in machine vision with a camera company in 1997. My years in the Air Force, along with product development, give me a broad perspective on all types of customer situations and equipment, whether pneumatic or electrical, and a broad base of experience for helping customers with their applications. Today I'll be talking to you about the fundamentals of camera and image sensor technology, teaching the basic course for the AIA CVP. I'll be speaking in four segments, beginning with light basics and CCD/CMOS imager fundamentals, continuing to digital camera principles and interfaces, and finally camera types and when to use them.

To begin with the light basics: light is a piece of the electromagnetic spectrum, which includes everything from long-wave radio through TV and microwaves, moving into infrared, visible, and ultraviolet light, and continuing on through X-rays and gamma rays. I point this out because a lot of students don't recognize that light is in fact an electromagnetic wave, just like radio. One of the key points to be made here is that light is actually a very narrow band; against the total spectrum of RF energy, light is a sliver, and when we talk about applications we divide that light up in finer detail still, from near-infrared to the visible section to the ultraviolet, slicing the narrow band into even narrower pieces for our applications. Here we are primarily interested in wavelengths of light from 200 to 1100 nm, with the bulk of that being visible light from 400 to 750 nm. This is the energy your eye sees as visible light, and what most cameras focus on for imaging.
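The band boundaries just quoted can be collected into a small sketch (the exact cutoffs and the function name are illustrative, taken from the ranges given in the talk, not from any standard):

```python
def classify_wavelength(nm):
    """Classify a wavelength (in nanometers) into the machine-vision
    bands described in the talk: near-UV, visible, and near-IR."""
    if 200 <= nm < 400:
        return "near-UV"
    if 400 <= nm <= 750:
        return "visible"
    if 750 < nm <= 1100:
        return "near-IR"
    return "outside typical machine-vision range"

print(classify_wavelength(550))   # visible
print(classify_wavelength(300))   # near-UV
print(classify_wavelength(900))   # near-IR
```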
But for machine vision we also see applications in the near UV, from 200 to 400 nm, and sometimes near-infrared light is used as well, from 750 to 1000 or 1100 nm. Light is represented as both a particle and an electromagnetic wave. The light particle is called a photon. Photons carry energy, and the amount of energy in a photon determines its wavelength: since the speed of light is a constant, a photon with more energy vibrates faster and has a blue wavelength, while one with less energy vibrates slower and has a red wavelength. The wavelength corresponds to the colors I just described. The intensity of light is represented by the number of photons being emitted by a surface, and the color of light is its wavelength.

All of the sensors in the cameras we'll discuss work on what is called the photoelectric effect: photons are converted to electrons. When light hits a silicon surface, it dislodges an electron from the valence orbit, and that electron is held and counted on the imager in some fashion, which we'll discuss shortly. The number of electrons released depends on the intensity and the wavelength of the light. For sensors, we often see the quantum efficiency represented on the data sheets. Quantum efficiency is the ratio of incoming light that the sensor converts into charge, so 60% quantum efficiency means that for every 10 photons hitting a pixel surface, six are converted into charge. QE is sensor specific; camera design does not affect the QE curve. The curves you see here are given to us by the sensor manufacturers whose parts we use in our cameras, and while we want to do good things electronically in the camera to preserve the best the sensor can do, nothing we do in the camera can change the quantum efficiency or the response of the sensor to light. QE is also given in absolute or relative terms. The sensors shown here state absolute quantum efficiency.
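The energy-wavelength relationship described above is E = hc/λ; a quick sketch (constants rounded, values illustrative) confirms that a blue photon carries more energy than a red one:

```python
PLANCK = 6.626e-34   # Planck's constant, J*s
C = 2.998e8          # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon at the given wavelength: E = h*c/lambda."""
    return PLANCK * C / (wavelength_nm * 1e-9)

blue = photon_energy_joules(450)   # toward the blue end of the visible band
red = photon_energy_joules(700)    # toward the red end of the visible band
print(f"blue: {blue:.3e} J, red: {red:.3e} J")
assert blue > red  # shorter wavelength means more energy per photon
```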
Absolute means exactly what I just described: with 60% QE, six out of every 10 photons hitting the surface are registered on the sensor. Some sensor manufacturers instead give quantum efficiency in what are called relative terms. Relative quantum efficiency simply means the manufacturer takes the peak of the curve, whatever that peak may be, and normalizes it to 100%. In these curves the sensor peaks slightly over 50%, so this manufacturer would take a multiplier of just under two, scale that peak up to 100%, and redraw the curve multiplied by that factor. Relative QE is good for that sensor only: it tells you how much response you get from that sensor as the wavelength of the incoming light changes, or if you were using monochromatic sources. However, because the curve has been normalized by a factor unknown to the user, you can never use relative quantum efficiency to compare one sensor to another, even if they're from the same manufacturer.

The full well capacity of the sensor, or of the pixel, is the number of electrons that can register in a pixel. Larger pixels have a higher well capacity, which also tends to lead to higher sensitivity, better signal-to-noise, and increased dynamic range. These numbers are typical rather than absolute, but looking around at the sensors we find in the industry, very small pixels have full well capacities around 4,000 electrons or more, medium-sized pixels have well capacities more like 10,000 electrons, and larger pixels can hold 50,000 or more electrons.
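The relative-QE normalization just described can be sketched as follows (the curve values are made up for illustration; the point is that the scale factor is unknown to the user, so normalized curves from two sensors both read 100% at their peaks and cannot be compared):

```python
def to_relative_qe(absolute_curve):
    """Normalize an absolute-QE curve (wavelength -> percent) so its
    peak reads 100%, as some manufacturers do on their data sheets."""
    peak = max(absolute_curve.values())
    return {wl: 100.0 * qe / peak for wl, qe in absolute_curve.items()}

# Hypothetical absolute QE curves (percent) for two different sensors.
sensor_a = {450: 40.0, 550: 52.0, 650: 35.0}   # peak just over 50%
sensor_b = {450: 20.0, 550: 26.0, 650: 17.5}   # only half as sensitive

rel_a = to_relative_qe(sensor_a)
rel_b = to_relative_qe(sensor_b)
# Both peaks now read 100.0, so the real 2x sensitivity gap is invisible.
print(rel_a[550], rel_b[550])
```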
Now that we've covered the light basics, let's talk specifically about CCD and CMOS sensors: what's going on inside them, how they convert light to charge, and what they can do for us in imaging. The main difference between CCD and CMOS sensors is how they transfer charge out of the pixel and read it out of the camera into the machine vision system. CCDs collect charge and roll it off the face of the sensor; CMOS imagers collect charge in the pixel, and readout circuitry at the pixel lets it be read out of the camera.

We can think of a CCD sensor as a bucket brigade: imagine a CCD pixel as a bucket collecting light, much as a bucket out in the open collects rainwater when it rains. Once the light is collected, a CCD-type sensor shifts that charge across the surface of the sensor and out of the sensor into the camera body. CCD stands for charge-coupled device, and that's important because it means CCD imagers are current-driven devices. The charge is physically collected as a number of electrons in each pixel and moved across the sensor surface; it is shifted around by various voltage barriers, and as the barriers are dropped, charge moves from the pixels into shift registers and is then clocked out of the camera. When the camera begins to read a frame, it drops all the charge vertically by one row into a horizontal shift register at the bottom of the imager, reads out the horizontal shift register as one line of the image, then performs another vertical shift to move all the charges down one more row into the horizontal shift register, reads those out, and so forth, until the last line has made it to the horizontal shift register and been clocked out, much like the bucket brigade on the previous slide: a first-in, first-out situation.
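The bucket-brigade readout just described, shift every row down into the horizontal register and clock that register out one pixel at a time, can be sketched as (a toy model, ignoring the electronics):

```python
def ccd_readout(frame):
    """Simulate CCD readout order: rows are shifted vertically into a
    horizontal shift register one at a time, then clocked out pixel by
    pixel. `frame` is a list of rows; the bottom row reads out first."""
    pixels_out = []
    rows = [row[:] for row in frame]      # charge still sitting on the sensor
    while rows:
        shift_register = rows.pop()       # vertical shift: bottom row drops in
        while shift_register:
            pixels_out.append(shift_register.pop(0))  # horizontal clocking
    return pixels_out

frame = [[1, 2],
         [3, 4]]
print(ccd_readout(frame))  # [3, 4, 1, 2]
```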
The CCD output is an analog pulse whose charge is proportional to light intensity, so when you read out the sensor, a CCD is a quasi-analog device: the output it produces is actually an analog blip, even though it is being controlled and read out with digital pulses that govern the timing. CCDs are by nature quasi-analog.

Micro lenses increase photon collection over the area of the pixel by focusing photons into the photosensitive area. We'll talk about this more with CMOS too: in both CCD and CMOS imagers you have a pixel area, but that area has to include other structures to read charge out of the pixel, to control the pixel's shutter time and readout, and to get the information from the pixel into the camera. By definition, then, the whole pixel cannot be photosensitive; only a small area, usually called the photodiode, is sensitive to the incoming photons, and the rest of the structure is not photosensitive. To increase this photosensitive area, and to increase the overall quantum efficiency per the curves I showed, CCDs tend to use a micro lens over the surface. A micro lens can be thought of as a little magnifying glass over the pixel area; the manufacturer is able to create one over every pixel, and all that's happening is that light coming in at oblique angles, which would not normally strike the photosensitive area and might instead hit a shift register, is bent through the magnifying glass and focused down into the photosensitive area. Almost all modern CCDs use micro lenses. There are very few exceptions, mainly application-specific cases, especially in UV work, where a micro lens is undesirable; but as a starting point, almost all visible-light CCDs use micro lenses today. The pro is that they effectively increase the quantum efficiency of the pixel; the con is that they create an angular sensitivity
to the incident light. Even though light coming in at a steep angle is bent toward the photosensitive area, the bending is not perfect, and we see a roll-off in the response of the pixel as the angle of incidence of the light on the pixel increases. What you see in this diagram is that if this is a lens and this is the CCD surface, rays coming on-axis through the center of the lens are focused on the pixel, hitting it essentially on-axis, and the micro lens can very effectively bend all those rays into the center of the photosensitive area. However, if we look at a peripheral ray coming in at a steep angle, the pixel in that region of the sensor has to bend all the incoming rays, and it cannot do this perfectly efficiently: some of the steepest rays, even through the micro lens, are bent in a way that can't quite reach the center of the photosensitive area. This means the effective quantum efficiency is affected by the angular incidence of the ray itself.

We can see this in an actual chart for a Kodak KAI-340 CCD. You'll see, especially in the vertical direction, that the CCD has very high efficiency at small angles, but as the angle increases beyond about 10 degrees, the efficiency begins to fall off. You'll also notice they give this to us as a vertical efficiency and a different horizontal efficiency, and in fact this particular imager shows more shading across the horizontal axis of its surface. This is due to the fact that the photosensitive area is not always in the center of the pixel, and that it may not be square; in most cases it is a rectangle, so there is a larger vertical collection area than horizontal collection area, and the micro lens affects the two directions differently. Applications that are very dependent on uniformity and shading need to pay attention to this, because
the sensor itself is specified for a uniform response assuming all rays arrive on-axis; if we add a lens to the system that brings rays in off-axis, that will affect the uniformity. In general, the wider the field of view of the lens, the steeper the peripheral rays, and the more vertical and horizontal shading we are going to see. Micro lenses increase the pixel's effective area; this is often called fill factor, and micro lenses produce a high fill factor.

So, to summarize the CCD: horizontal lines are shifted down the surface of the sensor as described, and pixels are read out of a horizontal shift register through a common circuit, including an amplifier, to give us the pixel readout. The main advantage of CCDs is sensitivity; the main disadvantage is speed. This comes back to what I was saying about the charge being rolled across the surface of the device. Because the CCD is a charge-coupled device, we're physically collecting photons in the pixel, dropping the barriers, and actually moving that charge along the face of the CCD. Those of you with an electrical engineering background will remember that a certain number of electrons makes a coulomb, and one coulomb flowing in one second is one ampere of current. So when we talk about CCD sensors holding 50,000 electrons per pixel, with maybe 10 million pixels, there is actually a large current flowing on the surface of the CCD. The manufacturers of course try to keep that power down, but by nature you are moving current across the face of the CCD, and that dissipates power on the CCD face itself. For something like the Kodak four-megapixel sensor, there's roughly half an amp flowing to read out 4 million pixels 15 times a second, which makes a little over a watt on the face of the sensor. Besides the power, the other disadvantage is that there is a limit to the speed at which you can move those electrons around on the CCD surface.

Other issues for CCDs begin with blooming.
Blooming is the spread of charge to adjacent pixels due to saturation, especially in cases where you have a bright light, in this case sunlight, in the image, and you may be using a high-speed shutter. There is a limit to how much of the photon-generated charge the CCD can either read out through the shutter period or discharge outside it, and when pixels oversaturate, charge spreads to adjacent pixels and creates blooming artifacts such as you see in this image.

Similar to blooming, there is smear, or vertical streaking, which is also caused by saturated pixels spilling charge into the next column of pixels. Essentially, light is spilling into a vertical shift register. Even though only this one pixel is hot, the whole image is being shifted vertically down the face of the sensor, so the excess charge rolls into shift register positions used by other pixels and continues to streak through them. That is why you get a vertical streak, and you'll notice even on the bright side there is some subtle smearing going on here as well from the daylight.

A CMOS sensor works a little differently. CMOS stands for complementary metal oxide semiconductor, and CMOS imaging technology actually came from the development of RAM chips. In the early days of RAM for computers, and even nowadays, you sometimes see UV-erasable EPROMs, which are sensitive to UV light. Engineers would program them with certain instructions, load them into the computer, and when they wanted to change the program they would expose the chip to UV light to erase it and start over. Someone early on recognized that these registers, arranged in an array on a chip, were sensitive to light, and that if the device were
looking at light, it might actually be able to create an image. There has been plenty of divergence since, of course; CMOS imagers are optimized for imaging, not for storing RAM data, and vice versa, but the technology really came from that development in the early days of computing.

CMOS imagers are voltage-driven devices, and this is important. In a CCD, we talked about collecting charge and rolling the charge across the surface. In a CMOS imager, light striking the pixel creates a voltage proportional to the intensity. The physical effect is the same: light is still knocking an electron loose in the silicon. But in the case of the CMOS imager, that electron contributes to a voltage in the well, as opposed to a number of electrons being read out as a current. The voltage is sampled directly at the pixel, digitized on the imager, and cleared for the next frame; the charge is never rolled around the surface. There are ways of clearing the imager by dumping charge to ground, but essentially it is not moving charge around in the same way; it is a voltage-driven device. Consequently, the CMOS imager, unlike a CCD imager, has a totally digital output: by the time the image is coming off the imager, it is read out as a binary number, a digital sample coming off the sensor. So CMOS cameras are totally digital devices, whereas a CCD camera contains a quasi-analog section, and depending on the manufacturer it may have more or less analog circuitry.

CMOS works by having the ability to build multiple layers; as a complementary metal oxide semiconductor it can have complementary layers building devices on the sensor cell. This is called the stack-up as they build these layers. What typically happens, then, is that CMOS imagers, because of the stack-up, have a photosensitive well down at the bottom of the active area: because the device is built in layers, there is literally a photosensitive area at the bottom of a well that has some depth. Because of this structure,
CMOS imagers traditionally have not used micro lenses. You'll notice that the latest generations of CMOS imagers, and some of the most heavily advertised ones, now have a micro lens and advertise the increased sensitivity that results: the micro lens again helps increase the quantum efficiency, which in turn helps increase the sensitivity. But these are newer developments in CMOS, coming from manufacturers who have worked on thin metal films to make these layers thinner. So today you can see CMOS imagers both with and without micro lenses; the starting point for CMOS is usually no micro lens, unless it is something designed for higher-end imaging where the manufacturer has taken special steps to limit the depth of the layers and install micro lenses. Between not having a micro lens and some physics involved in the silicon itself, CMOS usually has a lower sensitivity than CCD just by the nature of what is going on; the micro lens helps close the gap, but there is still some physics that usually makes it a somewhat less efficient light-to-charge converter.

Also, because the image is digitized on the sensor, CMOS tends to have an active amplifier on every pixel, or at least on every column or group of columns, with these structures repeated throughout the sensor in order to get the active readout without having to flow the charge around. The downfall of that, and we'll see some examples of this coming up, is that every amplifier has a little bit of variation. Beyond random thermal effects, the fixed variation between all these massively parallel A/D circuits and amplifiers causes a fixed pattern noise in the image: because it is not changing, the gain of one A/D circuit is always a little higher than the one next to it, and despite manufacturers working to keep these uniform, they are never perfect. So typically one of the key noise sources in CMOS imagers is fixed pattern noise in the background.
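The per-column amplifier variation that produces fixed pattern noise can be sketched as follows (gain spread and frame size are invented for illustration; the point is that the pattern repeats identically every frame, unlike random noise):

```python
import random

random.seed(0)
WIDTH, HEIGHT = 6, 4

# Each column has its own amplifier with a slightly different, fixed gain.
column_gain = [1.0 + random.uniform(-0.03, 0.03) for _ in range(WIDTH)]

def read_frame(scene_level=100.0):
    """Read a uniformly lit scene through the per-column amplifiers.
    The column-to-column differences are baked in at manufacture, so
    the same stripe pattern appears in every frame: fixed pattern noise."""
    return [[scene_level * column_gain[x] for x in range(WIDTH)]
            for _ in range(HEIGHT)]

f1, f2 = read_frame(), read_frame()
assert f1 == f2  # the pattern is "fixed": identical frame to frame
```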
However, because they are not rolling the charge around, CMOS imagers tend to be more resistant to smear and blooming than a CCD. Voltage sampling is also faster than rolling charge off a CCD; it is akin to just taking a high-impedance oscilloscope-probe reading to check what the voltage is, instead of rolling the charge out through a correlated double sampling circuit and literally counting electrons as in a CCD. Less charge moved means less power, and voltage sampling versus rolling charge around means more speed. So the main advantages of a CMOS sensor over a CCD sensor tend to be speed and power consumption.

You can see this even in the consumer digital camera market. Most early digital cameras tended to be CCD, with lower pixel counts and lower quality. As the market evolved, getting sufficient quality typically meant using a CCD, which cost battery life among other things. As the consumer manufacturers pushed CMOS development, they got sensors that are more than good enough for the consumer market, plus the advantage of battery life, especially with flash, and the number of pictures you can take, and today you see almost exclusively CMOS imagers in the consumer market. In the machine vision market we are concerned with very high quality, and often with dynamic range and other effects that are minor concerns for a consumer camera, so CCDs still hold a place for quality, high dynamic range, and very low-light sensitivity. Still, in these types of applications we see CMOS imagers being used more and more today, because of the advantages of the architecture and because advancements in the technology itself are making the two more nearly equal. There are still trade-offs between CCDs and CMOS imagers to be considered by the user; there is no one clear answer. The main disadvantages of CMOS, as I mentioned, tend to be sensitivity and fixed pattern noise.
Among the other issues for CMOS imagers is rolling shutter, and I have a diagram which shows this more clearly. Essentially, in a CCD, as I showed, everything is done on the surface of an interline-transfer CCD, which means the entire substrate below can act as a ground plane. A CCD imager generates an electronic shutter simply by taking all the pixel wells and grounding them to the substrate all at once; the substrate has a large capacity compared to the pixel capacity, enabling it to very quickly discharge a very large amount of charge and make a complete, efficient, very uniform discharge of all the pixels. In CMOS, they have to get rid of the charge the same way, but they have to do it a little differently, and the starting point for CMOS is the rolling shutter. This means the imager can only expose one line, read out that line, expose the next line, read out that line, and so on. If these imagers were of equal resolution and both had 1,000 vertical lines, then with the CCD and its electronic shutter, all 1,000 lines are exposed simultaneously and the motion is frozen simultaneously from the top of the image to the bottom; there is no temporal difference in the exposure from the first line to the last line. In fact, this motorcycle is moving at over 160 miles an hour, and we can see the spokes and the holes in the calipers, and everything is perfectly frozen. With the CMOS imager, if this frame had 1,000 lines, there were 1,000 different exposures that took place within the frame. Even if the frame time is relatively fast, say a 100-frame-per-second CMOS camera, it is still a hundredth of a second to take the frame, and within that hundredth of a second there were still literally 1,000 exposures. Now, when we look at this image, taken out of a moving car, where someone liked the scene and took the picture out of the moving car, people might really think this railing was built slanting forward in the direction of movement of the car. It of course was not, and this is
the rolling shutter effect: each line of pixels was read out at a slightly different time, and as the car moved forward during the readout, everything tends to shift in the slanted direction. The other thing I will point out while we have this image up is that the rolling shutter skew is proportional to the velocity in pixels per second, and of course the velocity in pixels per second for objects close to the moving car is much higher than for the background. So you notice the rolling shutter effect is very strong on something close to the car, where there is a lot of apparent movement, like this rail, while something like the island in the background is relatively unaffected, because with a very wide field of view there are very few pixels covering hundreds of feet of distance.

This diagram shows what I just described. This is a CMOS issue: there is an integration period, then the line is read out; the next line integrates and is read out; and as you can see in the timing diagram, the exposures are staggered down the frame, and this is exactly what happens to create the image distortion. The only caveat is that in a machine vision application, if you can have an entirely black background with no external light leakage, you can set off a strobe and have a chance to freeze the motion in a rolling-shutter scenario. But again, you have to have very controlled circumstances, with no light entering during the other parts of the frame when lines are photosensitive but you do not want them integrating light.

So, coming back to our example: if we take an image of an object moving horizontally through our field of view with a CCD camera, or a CMOS camera with a global shutter, whether CCD electronic shutter or CMOS global shutter, the object will be frozen; there will be one exposure for the entire frame. But with a CMOS imager with a rolling shutter, there are multiple exposures within the frame, and we see a displacement proportional to the object's horizontal velocity in pixels over the frame scan time.
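The skew geometry just described, where each line is exposed a little later and a horizontally moving object is displaced in proportion to its speed in pixels per second, can be sketched as (line time and speeds are invented numbers for illustration):

```python
def rolling_shutter_skew(object_speed_px_per_s, line_readout_time_s, num_lines):
    """Horizontal displacement (in pixels) between the first and last
    line of a rolling-shutter frame, for an object moving horizontally."""
    frame_scan_time = line_readout_time_s * num_lines
    return object_speed_px_per_s * frame_scan_time

# A close, fast object (high speed in pixels/s) skews far more than a
# distant background object covering few pixels per second.
close = rolling_shutter_skew(2000.0, 10e-6, 1000)   # 20 px of skew
far = rolling_shutter_skew(50.0, 10e-6, 1000)       # 0.5 px of skew
print(close, far)
```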
Again, this is the same image from the beginning, and we can see the effect in the bars because they are moving very fast past a car that is very close to them, while in the distance you don't see it, because the velocity in pixels per second is much less.

When we talk about CCD and CMOS imagers, and this comes back mostly to CCDs and the days of interlaced TV scanning, there are progressive and interlaced scanning methods for camera readout. There are some interlaced CMOS imagers that were developed early on to intentionally mimic the TV interlaced formats, but for the most part, interlaced scanning tends to be a characteristic of CCD devices. The image from the camera is formed by a sequence of pixel lines, scanned and displayed in one of two different ways. Progressive scanning means all the lines are exposed at the same time and then read out in order. Interlaced scanning means there is an exposure and then a readout of the odd-numbered lines, and then correspondingly a second exposure and readout of the even-numbered lines, with the odd and even lines fit together to make a complete frame.

This again comes back, as several things do, to the early days of TV. Interlaced scanning was used in normal TV systems, which were developed back in the late 1930s and early 1940s. At the time the technology was developed, they wanted to broadcast over radio frequencies; there was no HDTV or satellite, and they literally couldn't capture all the pixels, broadcast them, and re-create them; it would take too much bandwidth. So what the Americans came up with in the early days of TV, in the NTSC system, was interlacing.
Rather than trying to get everything into one high-bandwidth signal, read it out, and re-create it with high-speed scanning on a CRT, when a TV in 1940 depended on the large magnetic deflection devices inside, they simply cut the scanning rate in half and read out half the number of lines at a time. The TV camera would capture, as I described, the odd lines, shown in gray here: lines 1, 3, 5, 7, 9, 11 and so on would be captured, the shutter would freeze them, and they would be read out. Then it would capture a second exposure, shown by the black lines, of the even-numbered lines, 2, 4, 6 and so forth through the imager, freeze them, and read those out. This gave your TV time to scan: in the same manner, the TV's scanning apparatus would paint the odd-numbered lines and then go back and paint the lines in between. This helped with the transmission and with re-creating the image in the early days of broadcast.

It actually had one advantage, too: although the frame rate itself was only 30 per second, the fields repeated at 60 Hz, so where there was high-speed motion, the field nature of this scheme, at 60 fields per second, let us capture motion fairly accurately. Because interlacing puts two 1/60-second fields into one 1/30-second frame, when watching on TV our eyes were relatively immune to this; our eyes could see both the motion and the fine resolution even though there was some blurring. You'll see the blurring that occurs. This is fine for TV and for our eyes, but in a machine vision system we are usually capturing the image because we want to do some analysis on it, and the kinds of blurring and artifacts that come from interlacing are bad for us in machine vision.

This is an example of a soccer ball that has been kicked, with interlaced blurring, in an actual capture off a TV-type camera. You'll see the reason it is spiky is, as I said, that half the lines are read out and then the other half, and what you see
is the separation between the two fields. On the edges you'll notice the lines of one field were captured first, then the ball moved; actually, I should say the trailing edge was captured first, the ball moved forward, and then the leading edge was captured. You are actually seeing the physical separation, within what is presented as one frame, caused by the difference in exposure time and the movement of the ball. For our eyes, if we watched a ball kicked by a soccer player, we would never see the pattern on the ball anyway; we would just see the ball and its movement. But in a machine vision system, where we are capturing frames and doing analysis, this is ugly: if the machine vision system were trying to outline the object, say put a line around the ball, or to inspect the test pattern on the ball, it would be having a hard time right now.

Progressive scan, also called non-interlaced scanning, where one field equals one frame, solves this. In progressive scanning there is only one exposure, common to all the lines. The lines are then read out as I described, with the first pixels coming off the imager and continuing sequentially until the last pixels are read out. The benefit is that the full frame is available as the result of a single shutter event. The downside is that progressive scan is a non-standard scanning system: a progressive scan source no longer plugs into your TV or VCR the way any TV-format camera plugs into any VCR or other device. It means that if I use a frame grabber or some other device to capture that non-standard source, it must be configured for it, because once I leave the standard TV format, there is no standard left in the analog realm for the image that is created. We'll talk a little about digital standards and what's happening there later, but as far as the video format coming off the camera, it can be anything.
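Weaving the odd and even fields of an interlaced capture back into one frame, as a frame grabber does, can be sketched as (line values are placeholders; in a real capture the two fields are 1/60 s apart, which is exactly why moving edges don't line up):

```python
def weave_fields(odd_field, even_field):
    """Interleave two fields back into a full frame.
    odd_field holds lines 1, 3, 5, ... and even_field holds lines
    2, 4, 6, ... (1-based, as in the TV scanning description above)."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

odd = [["a1"], ["a3"]]    # captured during the first exposure
even = [["a2"], ["a4"]]   # captured ~1/60 s later
print(weave_fields(odd, even))  # [['a1'], ['a2'], ['a3'], ['a4']]
```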
By contrast, if a camera is TV interlaced, you know there are just under half a million pixels in it: NTSC scanning in the US gives 768 x 494 active pixels with 2:1 interlacing, and we can build something around that. If I tell you I have a one-megapixel progressive scan camera, you know nothing more than that there are a million pixels; I also have to tell you whether it is a 1:1 aspect ratio at 1K x 1K, or perhaps 1200 x 800, because the line count can be anything in a nonstandard format.

So, to reiterate how progressive and interlaced scanning differ: one image is what comes out of a progressive camera, and the other two are what the two fields of an interlaced scan of the same scene look like. For frame grabbing, the first would be captured whole from the progressive scan camera, while the two fields would each be captured by the frame grabber as fields and joined back into a frame. With interlaced scanning, spatial resolution is reduced but temporal resolution, resolution over time, is improved: twice as many images are presented per second. Progressive scanning forms sharper images; interlacing gives smoother motion, but also the line-to-line tearing in the image that I just described for TV.

Image format is one of the things we get a lot of questions on, and the truth is it also comes back to the old days of TV. Whether we are talking to the optics people about lenses or to the camera people about the optical format of the sensor, a lot of people are confused by the fact that a 1-inch sensor has no bearing to one inch in physical size, a half-inch sensor has no bearing to half an inch, and so on. The names come from the days before there were CCD or CMOS sensors, when TV cameras were actually tubes, made much like the tubes that projected the picture onto your display, except that these tubes collected light: they had a photosensitive area in an electronic tube with a magnetic deflection yoke around it. What became the 1-inch format was literally based on the physical dimensions of the 1-inch tube: how much photosensitive area fit inside the deflection yoke of a 1-inch tube in the 1940s. That is how we got the 1-inch format, and later the two-thirds, half-inch, one-third and even one-quarter-inch formats as they came along. So when we in machine vision say 1-inch format, we all understand a 4:3 aspect ratio area of 12.8 mm x 9.6 mm, or 16 mm on the diagonal. Again, a lot of users are confused: they hear 1-inch format and expect a 25 mm diagonal, or 25 mm horizontal, or something related, when in fact it is the photosensitive area that fit in the 1-inch tube. The other interesting thing to note is that there is also a metric designation, recommended for use in Europe, and when you see it you will note that it actually matches the diagonal of the format; like a lot of things in the metric system, that makes more common sense.

The last thing we will talk about quickly is the C-mount and CS-mount for machine vision cameras. We do see other mounts nowadays; larger mounts, bayonets, M42 and the like have come along with the very large imagers, but the most common mount on machine vision cameras is still by far the C-mount, and within that the substandard called CS-mount. The diagram shows what is happening with the lens on a standard C-mount camera: there is a focal plane inside the camera, separated by some distance from the female threads on the front of the camera into which the lens screws. For standard C-mount, that distance is 17.526 mm. CS-mount simply means that the back flange distance is exactly 5 mm shorter, only 12.526 mm. As for what CS stands for, I was always told it came from security cameras.
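Because the "inch" names track the old tube diameter rather than any dimension of the sensor itself, it can help to compute the actual diagonals. A small sketch, using the nominal 4:3 active-area sizes commonly quoted for these formats (the values below are the usual nominal figures, added by me, not read from the presentation):

```python
import math

# Nominal 4:3 active-area sizes in mm for common optical formats.
# The "1 inch" format is 16 mm on the diagonal -- not 25.4 mm --
# because the name refers to the old tube diameter, not the sensor.
FORMATS_MM = {
    '1"':   (12.8, 9.6),
    '2/3"': (8.8, 6.6),
    '1/2"': (6.4, 4.8),
    '1/3"': (4.8, 3.6),
}

def diagonal_mm(width, height):
    """Sensor diagonal from its active width and height."""
    return math.hypot(width, height)

for name, (w, h) in FORMATS_MM.items():
    print(f'{name:>5} format: {w} x {h} mm, diagonal {diagonal_mm(w, h):.1f} mm')
```

The computed diagonal (16 mm for the 1-inch format) is exactly the figure that the European metric-style designation tracks.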
The reason this is important, and the reason the shorter back flange came about, is that when you want a very wide-angle lens, placing the back of the lens closer to the focal plane makes it much easier to create. That is why there are still CS-mount lenses out there for very wide-angle applications, especially security, where they pay back for this type of mount. But we also find it causes a lot of confusion for customers who have a CS-mount camera and a C-mount lens, or vice versa, a CS-mount lens and a C-mount camera, and are not sure what to do. Since it is only a physical difference of 5 mm, you only have to think of it in a physical sense. We can add 5 mm: take a 5 mm adapter ring and any C-mount lens, and put it on a CS-mount camera. The CS-mount camera expects only 12.5 mm, which is too short for a lens designed for 17.5 mm, so we add the ring and get the right distance. What gets people is that you cannot do the reverse: you cannot take a 5 mm adapter ring and a CS-mount lens and use it on a C-mount camera, because the C-mount camera was designed for 17.5 mm and you would need to remove, not add, 5 mm to make the CS-mount lens work. Unless you are taking your camera to a machine shop, you are not going to use a CS-mount lens on your C-mount camera.

The other thing to mention is that the distances we quote are given in free air, as an effective distance: the physical distance as if there were a single lens in free air. Cameras often have glass in between: protective windows on the imagers, filters from the manufacturer, color filters in color cameras, and sometimes users even screw filters into the bottom of the C-mount threads, between the lens and the focal plane. This is okay, because you can adjust the back focus toward that ideal 17.526 mm.
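The adapter-ring rule just described, add 5 mm but never remove it, can be captured in a few lines. A hypothetical helper (my own illustration; the function and names are not a real API) using the flange distances quoted above:

```python
# Flange focal distances from the presentation, in mm.
C_MOUNT_MM = 17.526
CS_MOUNT_MM = 12.526   # exactly 5 mm shorter than C-mount

FLANGE_MM = {'C': C_MOUNT_MM, 'CS': CS_MOUNT_MM}

def lens_fits(lens_mount, camera_mount, spacer_mm=0.0):
    """True if the lens can reach focus on the camera.

    A spacer ring can only ADD distance in front of the camera
    flange; a negative requirement would mean machining material
    off the camera, which we rule out.
    """
    needed = FLANGE_MM[lens_mount] - FLANGE_MM[camera_mount]
    return needed >= 0.0 and abs(spacer_mm - needed) < 1e-6

print(lens_fits('C', 'CS', spacer_mm=5.0))  # True: C lens + 5 mm ring on a CS camera
print(lens_fits('CS', 'C'))                  # False: would have to remove 5 mm
print(lens_fits('CS', 'C', spacer_mm=5.0))  # False: a ring only makes it worse
```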
The effective distance, again, is always 17.526 mm; the physical distance may be slightly longer if I put a piece of glass in there, because glass slows the light relative to the speed of light in air, and that causes the distance to lengthen slightly, but all the calculations are still based on 17.526 mm.

So, in summary: the number of photons hitting a pixel during the exposure time creates a number of electrons in the pixel well. These form a charge that is read down the face of the CCD, converted into a voltage, amplified, and then digitized, resulting in a digital gray value for that pixel. CCD, then, is high image quality at typically lower speed; CMOS tends to be higher speed with lower image quality. And again, pay attention to whether you have a global or rolling shutter situation, and do not use a rolling shutter for motion. This concludes section 1 of the presentation.
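The signal chain in the summary, photons to electrons to voltage to a digital gray value, can be sketched as a toy model; every parameter value below (quantum efficiency, full-well capacity, ADC bit depth) is illustrative, chosen by me rather than taken from the presentation.

```python
def photons_to_gray_value(photons, qe=0.5, full_well_e=20000, adc_bits=8):
    """Toy model of a sensor pixel's signal chain.

    Photons arriving during the exposure create electrons in the
    pixel well (scaled by quantum efficiency), the well saturates
    at its full-well capacity, and the accumulated charge is then
    amplified and digitized to a gray value.  Illustrative numbers.
    """
    electrons = min(photons * qe, full_well_e)      # charge in the well
    max_dn = 2 ** adc_bits - 1                      # 255 for an 8-bit ADC
    return round(electrons / full_well_e * max_dn)  # digitized gray value

print(photons_to_gray_value(10_000))   # mid-scale: 64
print(photons_to_gray_value(100_000))  # well saturates: 255
```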
