ON Semiconductor's intelligent sensing technology drives development in automotive, machine vision, and edge AI

Image sensing, depth perception, and sensor fusion are the three trends in the future of perception, and automotive, machine vision, and edge AI are the three major application markets for intelligent sensing. ON Semiconductor is the world's only company offering a complete portfolio of perception solutions, spanning ultrasonic, imaging, millimeter-wave radar, laser radar (LiDAR), and sensor fusion. With industry-leading technology and continuous innovation, it is a leader in imaging solutions for these three crucial application markets and is committed to delivering perception beyond human vision.

Automotive intelligent sensing paves the way toward automated driving
A driver's field of view generally covers only about 180 degrees ahead, or perhaps just over 200 degrees including peripheral vision, whereas a car that fuses image sensors, ultrasonic, radar, and LiDAR has 360-degree awareness. A vehicle currently enabled for automated driving carries about 9 image sensors (possibly up to 20 in the future), 10 radars, and at least 2 ultrasonic sensors, and will also be configured with at least one LiDAR. Moreover, a driver may be distracted for a few milliseconds at a time, while intelligent sensing detects and computes around the clock, outperforming the driver.
ON Semiconductor's comprehensive lineup of automotive intelligent sensing products breaks down barriers between industries to meet customer needs.
As a global leader in automotive image sensors, ON Semiconductor holds the foremost global market share and provides overall "visual perception +" solutions. It is the only supplier offering all types of automotive cameras, with a complete product platform from 1.3 to 12 megapixels, universal high dynamic range (HDR) and LED flicker mitigation (LFM) sensor architectures to meet demand, and 10 years of automotive experience as a technology leader in ADAS solutions. In addition, the company's Clarity+ configuration options synchronize human and machine visual perception, providing integrated vision and perception from a single camera. Its 4th-generation image sensors comply with the ISO 26262 functional safety standard (Automotive Safety Integrity Level, ASIL C) and are the world's first in mass production to include cybersecurity.
HDR: maximizing dynamic range is essential
High dynamic range (HDR) refers to the ratio between the brightest and darkest parts of an image. In automotive applications, maximizing dynamic range is essential: in typical scenarios such as tunnels or night driving, HDR helps machine vision or AI recognize and judge details, improving safety. ON Semiconductor's HDR imaging solutions reach 140 dB, higher than competitors' 120 dB.

LFM: suppressing LED flicker
LED flicker is a major challenge for OEMs, because different LEDs flash at different frequencies; if a traffic light's frequency does not match the camera's, the camera will fail to capture the signal. Analyzing the signal in software after capture adds latency and consumes power, so ON Semiconductor's LFM technology solves the problem directly in the sensor hardware.

Clarity+: the core technology synchronizing human and machine visual perception
ON Semiconductor's Clarity+ synchronizes human and machine visual perception. Systems on the market today use one camera for human vision and another for machine vision; ON Semiconductor uses a single camera to provide the machine vision signal and the human vision signal at the same time.

NIR+: improved near-infrared sensitivity
Night driving places very high demands on near-infrared performance. ON Semiconductor's near-infrared plus (NIR+) pixel technology increases near-infrared sensitivity about 4x, ensuring full signal acquisition, enhancing clarity, and further saving power.
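The 140 dB versus 120 dB comparison can be made concrete: image-sensor dynamic range is conventionally quoted as 20·log10 of the brightest-to-darkest signal ratio, so every extra 20 dB is another 10x of scene contrast. A minimal sketch (the conversion formula is standard; the dB figures are the article's):

```python
import math

def contrast_ratio(db: float) -> float:
    # Dynamic range in decibels -> linear brightest:darkest ratio,
    # using the 20*log10 amplitude convention common for image sensors.
    return 10 ** (db / 20)

def dynamic_range_db(ratio: float) -> float:
    # Inverse: linear contrast ratio -> decibels.
    return 20 * math.log10(ratio)

print(f"{contrast_ratio(140):,.0f}:1")  # a 140 dB sensor spans 10,000,000:1
print(f"{contrast_ratio(120):,.0f}:1")  # a 120 dB sensor spans 1,000,000:1
print(contrast_ratio(140) / contrast_ratio(120))  # 140 dB covers 10x the range of 120 dB
```

In other words, the 20 dB gap over the competing 120 dB parts is not incremental: it is a full order of magnitude more usable scene contrast, which is what tunnel-exit and oncoming-headlight scenes demand.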
Global shutter: the world's first
Global shutter technology, mainly used for scanning in industrial applications, is now moving into the car, especially for monitoring driver fatigue. ON Semiconductor was the world's first in the global shutter field.

Functional safety: the first key to automated driving
The first key requirement of automated driving is safety. The image sensor itself can fail: short circuits, weakening signals, and other issues may arise. ON Semiconductor injects more than 8,000 kinds of identified failure modes into its chips to verify chip reliability and ensure functional safety. Functional safety means that the component's circuits can self-test and provide complete, accurate information, independent of road conditions. The company has also developed a safety manual for customer reference.

Directions for automotive image sensing
The automotive image sensor market is growing rapidly, mainly serving advanced driver assistance systems (ADAS), cabin cameras, viewing cameras, camera monitoring systems (CMS), and automated driving. Some large trucks replace mirrors with CMS to reduce wind resistance, saving 5-10% of energy. Future directions for automotive image sensing include: 1. human vision outside the vehicle, including surround view and rear view, for people to see; 2. machine vision outside the vehicle, including ADAS and automated driving; 3. rapidly expanding cabin monitoring, including driver fatigue monitoring, illness detection, emotional/physiological testing, precise airbag adjustment for safety, human-machine interaction, iris detection, and face recognition. Human-machine interaction includes gesture recognition: adjusting cabin temperature, tuning the radio, and making calls can all be operated by gesture. Beyond monitoring the driver, occupants can also be monitored; a robot taxi, for example, needs many cabin monitoring functions, including seat-belt reminders, child presence detection, object detection, and pet detection. These are applications that future automated driving will involve.
Millimeter-wave radar for sensor fusion
L2, L3, and L4 automated driving all demand long-, medium-, and short-range millimeter-wave radar, and as its cost falls, demand in automotive applications will keep growing. ON Semiconductor's NR4401 is the market's first product with 4 synchronized transmit (Tx) channels; a single device can support both short and long range, reducing the bill of materials (BOM), cutting cost, and enabling flexible configuration and expansion.
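As general FMCW radar background (the article gives no NR4401 chirp figures, so the bandwidths below are common automotive examples, not product specifications), the long/medium/short-range trade-off is tied to chirp bandwidth: range resolution is ΔR = c/(2B), so a wider chirp resolves finer detail:

```python
C = 299_792_458.0  # speed of light in m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    # FMCW radar range resolution: delta_R = c / (2 * B).
    # Standard radar theory; not an NR4401 specification.
    return C / (2 * bandwidth_hz)

# Illustrative automotive chirp bandwidths (not from the article):
print(range_resolution_m(1e9))  # 1 GHz chirp -> ~0.15 m resolution
print(range_resolution_m(4e9))  # 4 GHz chirp -> ~0.04 m resolution
```

This is why a device that can reconfigure its chirps to cover both short and long range, as claimed for the NR4401, reduces the number of discrete radar parts a vehicle needs.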
FuseOne: integrating LiDAR and imaging
ON Semiconductor's FuseOne is currently the first automotive-compliant LiDAR on the market. It combines a megapixel image sensor with a flash-based automotive LiDAR built on an 8×2 silicon photomultiplier (SiPM) array; since it needs no mechanical scanning, it cuts cost from the market's thousands of dollars to hundreds, making solid-state radar a reality.

Automotive ecosystem partnerships
Establishing partnerships across the automotive ecosystem is very important. ON Semiconductor actively builds a cooperative ecosystem with automotive partners and suppliers to ensure its products, roadmap, and product concepts can connect and fit, and has established close contact with more than 50 imaging solution partners.

Growth of the machine vision market
With the rise of artificial intelligence, machine vision and industrial edge AI keep growing. Beyond traditional factory automation and smart factories, machine vision is increasingly used in intelligent transportation, new retail, smart buildings and homes, robotics, augmented/virtual reality and wearables, security and surveillance, and other fields. As AI raises the demands of understanding and judgment, requirements on image sensor resolution grow ever higher. With both CMOS and CCD image sensor technology, a full product line, and leading product performance, ON Semiconductor is an industrial machine vision leader.

XGS series: far ahead in machine vision
ON Semiconductor's competitive advantages in machine vision span size, frame rate, cost, performance, and application support. The company's PYTHON series is the industry's only one in which just two PCB designs support eight resolutions, from VGA to 25 megapixels, providing high-performance imaging and simplified upgrades for demanding industrial applications.
The XGS series now improves performance further over PYTHON, with a high-bandwidth, low-power architecture and a 3.2-micron global-shutter CMOS design that reduces design size while providing high performance and low noise. A single camera design can support multiple resolutions and feature sets for fast, flexible expansion, making it ideal for factory automation, intelligent transportation systems (ITS), broadcast imaging, intelligent retail, and robotics. More importantly, the XGS series currently offers the market's only 16-megapixel product in a 29×29 mm design size.
XGS45000: 45-megapixel global shutter
The latest XGS45000 is a 45-megapixel global shutter sensor, high speed and low power (<3 W), supporting 8K video at 60 fps. A single camera, with one exposure and one sampling, can clearly capture every detail of the target, a major advantage for machine vision and high-end security.

KAI-50140 CCD image sensor: combining high resolution and high image uniformity
The KAI-50140 is an ultra-high-resolution (50-megapixel) CCD image sensor with a 2.18:1 aspect ratio matching modern smartphone displays, delivering world-class image uniformity and imaging quality for more efficient smartphone display inspection. It extends to important inspection and surveillance applications such as critical end-of-line (flat panel) inspection, wide-area surveillance, and aerial photography, and can leverage existing camera designs to simplify upgrades.

Meeting edge devices' demand for more intelligence and lower power
The Internet of Things (IoT) will involve a vast number of edge devices; fog nodes aggregate the processed data and send it to the cloud, and final instructions travel from the cloud back through the fog to the edge devices. The coming proliferation of edge devices, together with on-device artificial intelligence (AI) analysis, requires power to be as low as possible and devices to be smarter.

The AR0430/AR0431, designed for IoT, meets this need well, delivering 4-megapixel, 30 fps output at under 70 mW; its power scales, consuming under 8 mW at 1 fps.

ON Semiconductor is currently developing a next-generation sensor for edge AI that includes motion detection with automatic wake-up, achieves global shutter functionality at a cost equivalent to rolling shutter, runs at frame rates up to 250 fps with ultra-low power of 17 mW at 30 fps and 2.5 mW at 1 fps, is small enough to conceal, and uses NIR+ technology to ensure effective nighttime imaging. A typical application is home surveillance cameras.

Super Depth: depth mapping technology
Intelligent robots need a sense of space: not just perception in the plane, but perception of depth all around. Depth mapping currently spans five technologies: stereo vision, structured light, time of flight (ToF), Super Depth, and radar. ON Semiconductor's Super Depth technology synchronizes color, depth, and image from a single sensor without any supplementary lighting, giving service robots and home robots depth perception of their surroundings.

SPAD array LiDAR/ToF: highly accurate depth
ON Semiconductor is fusing its image sensing technology into LiDAR development, turning laser radar output into a matrixed image. Pandion is ON Semiconductor's new single-photon avalanche diode (SPAD) array LiDAR, a matrix type with sensitive gain in the millions to hundreds of millions. With a flash solid-state light source it can perceive depth within three meters; with controlled beam steering it can perceive depth to about 100 meters.
Because it uses a dot-matrix photoreceptor, Pandion forms a 400×100 matrix. Unlike a traditional point cloud, its output is already an image. At 0.1 lux of ambient light, a scene almost invisible to the naked eye can still be seen with Super Depth technology.
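A SPAD-based LiDAR such as the one described above measures depth by time of flight: the sensor times a light pulse's round trip and halves it. A minimal sketch of that conversion (the ~100 m and 3 m figures are the article's ranges; the nanosecond values are derived from them):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    # Time-of-flight depth: the pulse travels to the target and back,
    # so distance is half the round-trip time times the speed of light.
    return C * round_trip_s / 2

# The ~100 m beam-steering range corresponds to a ~667 ns round trip:
print(tof_distance_m(667e-9))  # ~100 m
# The 3 m flash-illumination range corresponds to only ~20 ns:
print(tof_distance_m(20e-9))   # ~3 m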
Summary
ON Semiconductor is the world's only supplier of full intelligent sensing solutions, spanning ultrasonic, imaging, millimeter-wave radar, LiDAR, and sensor fusion, and a leader in imaging solutions for vital markets such as automotive automated driving, industrial machine vision, and edge AI. With leading technology, a long tradition of innovation, and continued investment, it stays at the front of the industry and actively develops its ecosystem, while cultivating the Chinese market with a team of professional technical, sales, and customer service personnel.
