The offline solution targets lightweight interaction. Compared with the cloud-transmission approach, it achieves lower power consumption, lower latency, and lower cost.

Brain-inspired (neuromorphic) computing is an emerging paradigm: it can greatly improve real-time data processing and on-device machine learning while achieving lower energy consumption and a smaller footprint, and it is widely regarded as a path toward the next stage of artificial intelligence and computer miniaturization. Intel, IBM, Qualcomm, and other giants have invested heavily in its research and development in recent years.

However, because the technology is still at an early stage, product maturity and application scenarios remain exploratory. Most chips on the market are general-purpose parts used mainly in scientific research; no mass-produced products have appeared yet. Moreover, with much of the R&D concentrated on processors, matching neuromorphic sensors have progressed slowly, which has further delayed the technology's commercialization.

Recently, one company has entered this specialized field: SynSense, which has developed a smart vision sensor SoC based on brain-inspired computing and will deliver development modules to customers in June 2020. Founded in Switzerland in 2017 and backed by the Institute of Neuroinformatics of the University of Zurich and ETH Zurich, the company takes neuromorphic technology as its main line and focuses on edge computing, developing ultra-low-power, low-cost edge processors and smart sensors. It provides complete solutions, including IP licensing, hardware design, software configuration, and algorithm development, for smart homes, robotics, intelligent security, autonomous driving, drones, and other fields.

Asked why the company started with vision and cut into consumer scenarios, SynSense founder Ning Qiao said that the smart home market has grown rapidly in recent years, and corresponding smart sensors are being deployed in large numbers. Vision is a crucial information modality; unlike lower-dimensional signals such as sound, it is harder to capture and process. In addition, industry R&D on dedicated vision SoCs remains insufficient, and in the direction of brain-inspired computing the field is essentially blank.

Beyond adopting a brain-inspired computing approach, the SoC SynSense is about to release contains a complete sensor-plus-processor solution, addressing the long-standing lack of sensors matched to neuromorphic processors and filling a gap in the industry. The product also adopts an offline working mode, very different from the cloud-processing mode common in the industry. This reflects a difference in the focus of the solution, specifically:

  • The cloud-processing mode mainly targets the processing of massive data. With the maturing of 5G and growing algorithmic compute, cloud-based solutions are being deployed at scale, but they have drawbacks: on the one hand, the information collected in the cloud is voluminous yet often redundant, placing high demands on transmission and processing; on the other hand, uploading data from the terminal to the cloud adds intermediate transmission and processing steps that raise device power consumption and therefore the overall cost of the solution.

  • SynSense adopts an offline processing solution, mainly addressing human-computer interaction scenarios that demand real-time response, for example, completing the processing of a full continuous gesture as quickly as possible at very low power. At present, SynSense's complete vision solution achieves sub-milliwatt power consumption and millisecond response; at the same time, this lightweight device enables lower cost and more efficient real-time human-machine interaction than cloud solutions.
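The cloud-versus-offline contrast above can be sketched with a toy event-camera model (this is an illustrative assumption, not SynSense's actual pipeline): a dynamic vision sensor fires events only where brightness changes, so a mostly static scene yields a handful of events to process on-device instead of a full frame to upload.

```python
import numpy as np

THRESHOLD = 0.1  # hypothetical brightness change required to fire an event

def frame_to_events(prev_frame, frame, threshold=THRESHOLD):
    """Return sparse (row, col, polarity) events where brightness changed."""
    diff = frame - prev_frame
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# Static 64x64 scene in which only a 2x2 patch lights up between frames.
prev_frame = np.zeros((64, 64))
frame = np.zeros((64, 64))
frame[10:12, 20:22] = 1.0

events = frame_to_events(prev_frame, frame)
print(len(events))  # 4 events, versus 4096 pixels in a full frame
```

Because the event stream is sparse, downstream processing (e.g. a spiking network) only does work when something changes, which is the intuition behind the sub-milliwatt power figures quoted above.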

At present, SynSense's main business falls into two directions: first, visual signals, i.e. ultra-low-power, ultra-low-latency real-time processing of dynamic-camera input and intelligent applications, with smart home, robotics, and intelligent security as the main scenarios; second, ultra-low-power real-time processing of natural signals such as bio-signals and voice, applicable to mobile phones, health monitoring, and industrial machinery.

At present, this smart vision sensor SoC has been fabricated and tested successfully. The corresponding development kit will be delivered to customers in June, and mass production is expected by the end of the year. SynSense has set up hardware and chip R&D teams in Chengdu and Shanghai: the European team is mainly responsible for underlying IP and algorithm R&D, while the China teams handle system integration, engineering, and marketing. The company recently completed nearly RMB 100 million in Series A financing.