How FPGAs Make Combining Video Coding and AI Simple

Aupera is a start-up focused on next-generation system solutions for video data applications. Dr. Narges Afsham, senior AI engineer at Aupera, told LiveVideoStack in an interview that integrating video coding and AI on an FPGA is a natural fit.

LiveVideoStack: Regarding the video cloud service market, what differences do you see between China and overseas?

Narges Afsham: Looking at how the live streaming and short-video markets have developed over the past few years, China has an unmatched number of users, and its video cloud service market accordingly faces greater challenges and complexity: higher concurrency, a wider variety of terminal devices, and more complicated network conditions. In addition, understanding video content is a capability Chinese video cloud services must have, whether built on human resources or technical ones. Big players such as Facebook and Instagram are also rushing to offer more and more video-related services, including Facebook Live, on-demand video, and their latest multiplayer AR video games, all of which force video cloud service providers to keep breaking through technical barriers and bottlenecks when handling massive video streams and data. Personally, I think China's larger user base pushes its video cloud service providers to continuously improve: they focus on meeting the needs of large-scale user populations and delivering more optimized services, while U.S. video cloud service providers tend to be more innovative in introducing new video-related applications and technologies.

LiveVideoStack: Could you introduce how the Aup2600 series video processing platform, jointly developed by Aupera and Xilinx, is being applied?

Narges Afsham: The Aup2600 series is a new-generation video processing platform that Aupera developed on Xilinx's latest MPSoC FPGA devices. The system is designed and optimized for video processing on a distributed computing architecture, breaking through the CPU bottleneck in processing large-scale video streams. For live-stream transcoding and mixed audio/video streaming tasks, the system achieves 20 to 30 times the concurrent-stream processing efficiency of a traditional x86 server. More importantly, the system can be upgraded to run video AI applications without adding or changing any hardware, which is what makes it exciting. FPGA resources have been reserved for AI work; only a small portion of the fabric is needed for the video encoding and decoding tasks, so you can imagine the potential. With our system, real-time understanding of video content becomes practical, with the video codec and the AI application completed on the same chipset.
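To illustrate the idea of the codec and the AI model sharing one device, here is a minimal Python sketch of such a pipeline. It is purely illustrative: the decoder, encoder, and detector objects are hypothetical stand-ins for vendor bindings (they are not part of any Aupera or Xilinx API); the point is only the data flow, where each decoded frame feeds both the re-encode path and the analytics path without leaving the device.

```python
# Illustrative sketch only: the decoder/encoder/detector handles are hypothetical
# stand-ins for vendor-provided bindings; the real Aup2600 software stack is not shown.
import queue
import threading


class VideoAiPipeline:
    """Decode once, then feed the same frames to transcoding and AI inference."""

    def __init__(self, decoder, encoder, detector):
        self.decoder = decoder      # hypothetical FPGA H.264/H.265 decoder handle
        self.encoder = encoder      # hypothetical FPGA encoder handle
        self.detector = detector    # hypothetical FPGA-accelerated inference handle
        self.frames = queue.Queue(maxsize=8)

    def _decode_loop(self, stream_url):
        for frame in self.decoder.decode(stream_url):
            self.frames.put(frame)
        self.frames.put(None)       # end-of-stream marker

    def run(self, stream_url, on_result):
        threading.Thread(target=self._decode_loop,
                         args=(stream_url,), daemon=True).start()
        while True:
            frame = self.frames.get()
            if frame is None:
                break
            self.encoder.encode(frame)              # transcoding path
            on_result(self.detector.infer(frame))   # analytics path on the same frame
```

Passing the three handles in as constructor arguments is deliberate: any object exposing decode/encode/infer methods can be dropped in, which is how a sketch like this would stay independent of a particular board or SDK.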

LiveVideoStack: Why can video encoding/transcoding and AI applications be combined on an FPGA? What are the specific application scenarios?

Narges Afsham: Video engineers benefit from the hardware-accelerated codec and its dynamic, efficient QP allocation, while machine learning engineers benefit from the inference speed of algorithms implemented in hardware. Unifying the two on an FPGA is therefore very natural, and it also greatly shortens the development cycle.
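One concrete place these two sides meet is region-of-interest encoding: detections from the inference engine can drive per-block QP values so that salient regions get more bits. The sketch below shows only the QP-map logic, with hypothetical detection boxes as input; it is an assumption-laden illustration, not Aupera's actual encoder interface.

```python
# Illustrative only: build a per-block QP map from AI detection boxes so that
# regions of interest are encoded at higher quality (lower QP).
def build_qp_map(width, height, boxes, base_qp=32, roi_qp=24, block=16):
    """boxes: list of (x0, y0, x1, y1) pixel rectangles from the detector."""
    cols, rows = width // block, height // block
    qp_map = [[base_qp] * cols for _ in range(rows)]
    for x0, y0, x1, y1 in boxes:
        for r in range(max(0, y0 // block), min(rows, -(-y1 // block))):
            for c in range(max(0, x0 // block), min(cols, -(-x1 // block))):
                qp_map[r][c] = roi_qp   # spend more bits where the detector fired
    return qp_map


# Example: one detected face in a 1920x1080 frame
qp = build_qp_map(1920, 1080, [(640, 320, 960, 720)])
```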

LiveVideoStack: What unique advantages does Aupera's FPGA-based solution that combines video codec and AI offer?

Narges Afsham: It greatly improves the efficiency of large-scale, massively parallel video processing and improves video quality and visual effects. More importantly, the system can be upgraded to support AI applications without adding or modifying hardware, and it can analyze video streams in real time.

LiveVideoStack: Among the mainstream AI chips in the industry, besides the CPU there are GPUs, FPGAs, and ASICs. We all know that FPGAs are more flexible than ASICs and more efficient than general-purpose CPUs and GPUs, and more and more companies, such as Microsoft, Amazon, and Baidu, are choosing FPGAs as their AI computing platform. Google's recently revealed TPU, dedicated to AI deep learning computation, is actually an ASIC. Given this, how do you view FPGA+AI versus ASIC or general-purpose CPU+AI?

Narges Afsham: It all depends on the specific requirements of the application. GPUs, CPUs, and FPGAs differ in latency, energy efficiency, development time, and even chip size. GPUs offer strong floating-point performance and great design flexibility.

Xilinx MPSoC integrates a CPU and an FPGA on one device. Compared with an ordinary FPGA, it is more flexible and more capable, making it especially suitable for large-scale data center applications as well as low-power applications at edge nodes.

At the same time, we are all witnessing the rapid development of machine learning algorithms and networks, which change or are updated almost daily. In terms of flexibility and reconfigurability, FPGAs perform far better than ASICs.
