Accelerating the launch of AR/VR applications

Today, computer vision startup Xvisio Technology announced the completion of a new Pre-A financing round of 10 million, led by Junsheng Investment with participation from Suzhou Zhilang. The funds will reportedly be used to build out capabilities spanning core technology through system integration, to grow the talent team, and to expand into specific vertical markets.

Xvisio Technology is a computer vision company focused on developing the underlying technology of spatial perception and cognition (vSLAM+AI). Founded in Silicon Valley in 2016, it received angel investment from Zhongke Chuangxing in 2017 and settled in Shanghai. It currently has subsidiaries in the US and Europe, and focuses on providing vSLAM+AI visual edge computing platform solutions for AR/VR/MR, robotics, sweeping robots, drones, and other fields.

Xvisio's founder, Lin Qiong, graduated from the University of Southern California and Tsinghua University and has worked in image sensing technology for 15 years. He is a senior computer vision system architect and has worked at image sensor companies such as Toshiba, Aptina, and ON Semiconductor.

At the product level, Xvisio has built a multi-mode fusion visual edge computing platform based on the Intel Movidius VPU, and has launched the 3D XR Vision and 3D Robot Vision series of inside-out 6DoF tracking vSLAM module-level products for the AR/VR and mobile robotics fields, respectively.

Since its inception, Xvisio has established in-depth technical cooperation with a number of industry benchmark companies, including Dreamworld, Pacific AR, Nedjia, Confucius, and Xenon Dimension NXG, transitioning from a core-technology business model to one of system integration, with initial commercial results.

The following is a report on Xvisio Technology from January 2019:

2016 is regarded as the first year of the AR/VR/MR industry. The industry's development has followed the classic Hype Cycle of new technologies: prosperity, bubble, trough, and then a stage of rational growth. With the maturation of 5G technology, the AR/VR industry regained momentum at the start of 2019. On January 24, Intel's RealSense business unit released the T265, a new camera that provides 6DoF inside-out tracking for AR/VR headsets, robots, and drones. The release signals that RealSense is moving from depth sensing toward a full-featured vSLAM implementation, and that the machine vision industry is maturing.

Shanghai Xvisio Technology Co., Ltd. (hereinafter Xvisio) is a computer vision startup known for its vSLAM+AI edge computing platform. It was selected as one of the first 12 innovative companies in the "Intel Artificial Intelligence Partner Innovation Incentive Program," which aims to work with innovative companies to identify real pain points in artificial intelligence and land unique solutions for different scenarios. These multiple moves by industry giant Intel may also indicate that vSLAM, the core underlying technology of spatial awareness and human-computer interaction, is about to transition from algorithm to product.

At the recently concluded CES 2019, Xvisio won a CES 2019 Innovation Award. Its award-winning product, the eXLAM-80X, is an inside-out 6DoF tracking vSLAM module-level product for XR devices, robots, and drones. Lin Qiong, CEO of Xvisio, described it as follows: "It is a multi-mode fusion visual edge computing platform based on the Intel Movidius VPU. It fuses multiple sensors with high-speed binocular vSLAM so that, at a 100 fps refresh rate and with high resolution and tracking accuracy, it not only achieves on-device fusion of functions such as positioning, mapping, and obstacle avoidance, but also guarantees system robustness. The platform provides powerful edge computing capability along with the flexibility to choose the appropriate sensor integration for a specific application. At the same time, a feature-rich SDK enables advanced functions such as depth detection, plane detection, spatial overlay, 3D reconstruction, and gesture and object recognition to be integrated quickly and easily into VR/AR/MR devices and robots, reducing development cost and accelerating time to market."
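Module-level trackers like this are typically consumed through an SDK that streams timestamped 6DoF poses. The sketch below is purely illustrative: the class and method names are hypothetical (the article does not describe Xvisio's actual API), and a mock stands in for the device. It only shows the data shape an inside-out 6DoF tracker usually exposes, a 3D position plus an orientation quaternion at ~100 fps.

```python
# Illustrative only: MockTracker and Pose6DoF are hypothetical names,
# not Xvisio's real SDK. A 6DoF pose = 3D position + orientation quaternion.
from dataclasses import dataclass


@dataclass
class Pose6DoF:
    timestamp: float   # seconds since tracking started
    position: tuple    # (x, y, z) in meters
    orientation: tuple # unit quaternion (w, x, y, z)


class MockTracker:
    """Stands in for a module that streams poses at ~100 fps."""

    def __init__(self, fps=100):
        self.dt = 1.0 / fps
        self.frame = 0

    def read_pose(self):
        # A real module would fuse IMU + stereo vSLAM here; this mock
        # fabricates a device moving 1 cm per frame along x, no rotation.
        self.frame += 1
        return Pose6DoF(self.frame * self.dt,
                        (0.01 * self.frame, 0.0, 0.0),
                        (1.0, 0.0, 0.0, 0.0))


tracker = MockTracker()
poses = [tracker.read_pose() for _ in range(100)]  # one second of tracking
print(f"{len(poses)} poses, last x = {poses[-1].position[0]:.2f} m")
```

The point of the sketch is the integration model the article describes: the host application simply consumes poses, while all the heavy sensor fusion runs on the module itself.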

At the industry level, vSLAM has been a buzzword in recent years: from HoloLens to Magic Leap, from autonomous driving to sweeping robots, all rely on this technology. The combination of vSLAM and AI is the necessary path to environmental perception and cognition. However, because the algorithms are demanding, the industry chain is long, and the R&D threshold is high, only a few industry giants master the underlying core technology, while the many small and medium-sized enterprises without independent vSLAM R&D capability are locked out. Xvisio targets this extensive long-tail market.
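The core idea behind such tracking systems can be illustrated at toy scale: a vSLAM front end estimates the relative motion between consecutive frames, and the tracker composes those increments into a global pose. The following is a minimal 2D version of that pose-composition step (position plus heading, not a full 6DoF pipeline with mapping or loop closure); it is a conceptual sketch, not any vendor's implementation.

```python
# Minimal illustration of how a tracking front end accumulates relative
# motion estimates into a global pose. Real vSLAM works in 6DoF with
# mapping and loop closure; this 2D (x, y, heading) version shows only
# the pose-composition step.
import math


def compose(pose, delta):
    """Apply a relative motion `delta` (expressed in the body frame)
    to a global `pose`; both are (x, y, heading) triples."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            th + dth)


# Drive 1 m forward, turn 90 degrees left, drive 1 m forward again.
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, delta)
print(tuple(round(v, 3) for v in pose))  # (1.0, 1.0, 1.571)
```

Because each increment carries estimation error, naive composition drifts over time; correcting that drift (via mapping, relocalization, and loop closure) is exactly the hard part that keeps the R&D threshold high.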

"We have more than 50 person-years (one person working for one year) of industry-leading experience in high-speed vSLAM algorithm development and hardware implementation, and we are a strategic partner in the Intel Movidius ecosystem. In early 2018 we launched a high-speed vSLAM module based on the Movidius VPU. It addresses environmental perception and human-computer interaction problems such as high-precision inside-out 6DoF tracking, mapping, obstacle avoidance, and object recognition in the AR/VR, robot, and UAV fields, achieving a refresh rate of 100 frames per second, accuracy down to the millimeter level, and power consumption as low as 1.5 watts at the module level. Many AR and home robot customers have adopted our solution, which removes the bottlenecks of platform computing power and rapid commercialization." In November 2018, Xvisio was selected by the Ministry of Industry and Information Technology as a key IoT technology breakthrough enterprise. Lin Qiong added, "What we currently provide customers is more than a vSLAM edge computing platform; it also offers AI extension capabilities and traditional CV capabilities on which users can build further functions, and all of these features and possibilities are integrated into one small end-side device. The mass production of the eXLAM-80X is the best practice of our visual edge computing platform product strategy."

It is understood that Xvisio has established cooperation with companies in XR, service robots, sweeping robots, and other fields, using an independent vSLAM sensor computing unit to help customers solve the problem of insufficient system computing power. This concentrates the positioning and interaction solutions of an extremely long industry chain into a plug-and-play module-level end device, allowing customers to enter the market quickly. In the XR field, for example, its advantages stand out in solving edge interaction for split (tethered) headset designs. In robotics, combining vSLAM with AI on the end device shows unique differentiated value, giving robots the ability to both perceive and recognize. These application innovations also depend on Xvisio's customization capabilities; according to Lin Qiong, Xvisio has helped many customers apply AR technology in fields such as intelligent manufacturing and education.

[Product images: the 3D Robot Vision and 3D XR Vision series]

Xvisio Technology was founded in Silicon Valley in 2016, received angel investment from Zhongke Chuangxing in 2017, and settled in Shanghai; it currently has subsidiaries in the US and Europe. It is a computer vision company engaged in developing the underlying technology of spatial perception and cognition (vSLAM+AI), focusing on providing vSLAM+AI visual edge computing platform solutions to AR/VR/MR, robotics, sweeping robots, drones, and other fields. Founder Lin Qiong graduated from the University of Southern California and Tsinghua University, has worked in image sensing technology for 15 years, is a senior computer vision system architect, and has worked at image sensor companies such as Toshiba, Aptina, and ON Semiconductor.

The company currently has about 30 employees. The team includes former technical executives from Nokia and Lucent, specialists in low-power embedded system development, engineers with 20 years of experience in signal and image processing, and robotics PhDs with more than 10 years of vSLAM algorithm research. Products will enter mass production in mid-March 2019, and the company is currently in the Pre-A financing stage; the funds will be used to drive technology-to-product conversion, mass production, market promotion, and new product development.

"Let machines understand the world, and empower humanity with machine vision and artificial intelligence" — this is the vision of the Xvisio team, and the common goal of all practitioners in the industry.