
Moviebook Launches MAPE, a Breakthrough in Full-Stack Layout, to Facilitate the Iteration of Video Technology

NEW YORK, NY / ACCESSWIRE / March 29, 2019 / Moviebook's new video technology services automatically generate new video content after comprehending existing content, with the help of a combination of AI offerings.

Building on its AI+Video offering, MAPE, Moviebook has recently launched several AI application components designed for specific industries, covering a full-stack layout of intelligent image production technologies, platforms, and industry applications.

Designed for applications in the pan-entertainment field, MAPE provides three sets of AI technology components: MACS (Auto-Short Video Content), MALF (Lightweight Film Industry), and Information to Video. By intelligently analyzing and comprehending content and auto-generating new video content, MAPE facilitates the iteration of video technology and helps M&E companies stay competitive.

Targeting media platforms, MAPE has launched MAAM (Auto-All Media), Auto-Standard Video, and Video Analysis and Intelligence, covering intelligent native content, content creation, content revision, news-to-video conversion, and other applied fields.

The auto-production engine combines AI with interactive video technology in a complete technical framework that includes depth-of-field estimation backstepping technology, sub-pixel anti-track technology, video overlay technology, optimization computing technology, and more. With its three AI components (MCVS, Auto-structuring Video, and AGC, or Auto-Generating Video Content), it helps M&E companies tap into unstructured data and make well-informed decisions about the content they create, acquire, and deliver to viewers.
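The release does not explain how these component technologies fit together. Purely as an illustrative sketch, and not Moviebook's actual implementation, the Python snippet below shows one way a depth estimate could gate a simple video overlay so that foreground objects appear to occlude an inserted element; the file names and the toy depth estimator are assumptions.

```python
# Illustrative sketch only: composite an overlay onto video frames, using a
# per-pixel depth map to decide where the overlay is visible. File names and
# the placeholder depth model are assumptions, not Moviebook's MAPE APIs.
import cv2
import numpy as np

def estimate_depth(frame):
    """Placeholder for a monocular depth-estimation model: here we simply
    normalize intensity so the sketch runs without extra dependencies.
    Returns a float32 map in [0, 1], larger meaning farther away."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    return cv2.normalize(gray, None, 0.0, 1.0, cv2.NORM_MINMAX)

def overlay_frame(frame, overlay, depth, depth_threshold=0.5):
    """Blend `overlay` onto `frame` only where the scene is farther than
    `depth_threshold`, so nearer objects appear to occlude the overlay."""
    overlay = cv2.resize(overlay, (frame.shape[1], frame.shape[0]))
    blended = cv2.addWeighted(frame, 0.4, overlay, 0.6, 0)
    mask = (depth > depth_threshold)[..., None]   # background pixels only
    return np.where(mask, blended, frame)

cap = cv2.VideoCapture("input.mp4")    # hypothetical source clip
ad = cv2.imread("ad_banner.png")       # hypothetical overlay image
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out = overlay_frame(frame, ad, estimate_depth(frame))
    if writer is None:
        h, w = out.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("output.mp4", fourcc, 25, (w, h))
    writer.write(out)
cap.release()
if writer is not None:
    writer.release()
```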

The new service offerings effectively supplement existing video content development and application functions in the industry. Working from both structured and unstructured video data, MAPE uses an AI kernel to help customers comprehend their video data, extracting insights, structure, emotion, and visual analyses from videos. A video is first segmented into logical scenes according to pictures and semantic cues in the content; then, based on a deeper understanding of content and context, the system identifies scenes and automatically generates new content and context, achieving automatic video production.
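The segmentation method itself is not described in the release. As a minimal sketch of the "segment into logical scenes" step, assuming a simple histogram-based shot-cut detector rather than whatever semantic model MAPE actually uses, one might write:

```python
# Rough sketch of scene segmentation in the spirit of the pipeline above.
# The threshold, histogram settings, and file name are illustrative
# assumptions, not MAPE's actual implementation.
import cv2

def segment_scenes(path, hist_threshold=0.5):
    """Split a video into scenes by detecting large drops in color-histogram
    similarity between consecutive frames."""
    cap = cv2.VideoCapture(path)
    scenes, start, prev_hist, idx = [], 0, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if prev_hist is not None:
            # Low correlation between histograms suggests a scene cut.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < hist_threshold:
                scenes.append((start, idx - 1))
                start = idx
        prev_hist, idx = hist, idx + 1
    cap.release()
    scenes.append((start, idx - 1))
    return scenes  # list of (first_frame, last_frame) pairs

if __name__ == "__main__":
    for i, (a, b) in enumerate(segment_scenes("episode.mp4")):  # hypothetical clip
        print(f"scene {i}: frames {a}-{b}")
```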

For example, MAPE provides a new AI technology, MAAM, for media platforms. Media clients can rely on the big data platform, make use of news-to-video model technology, and present multi-dimensional data in many forms, such as data maps, timelines, bubble maps, interactive charts, and relationship maps, so as to realize news-to-video processing and production. In addition, it can explore innovative reporting modes such as the virtual studio and virtual host. Through VR, AR, face feature extraction, face reconstruction, emotion transfer, and other frontier technologies, it enables innovative presentation forms and interaction modes for news content.
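MAAM's internals are not disclosed. As a rough, hypothetical sketch of the news-to-video idea, the snippet below renders a chart for each point in a small invented dataset and stitches the rendered frames into a short clip; the data, chart style, and output settings are all assumptions, not MAAM's actual pipeline.

```python
# Illustrative news-to-video sketch: render a chart per data point and stitch
# the rendered frames into a clip. The dataset and styling are invented.
import cv2
import numpy as np
import matplotlib
matplotlib.use("Agg")                   # render figures off-screen
import matplotlib.pyplot as plt

# Hypothetical multi-dimensional news data: (month, story_count)
timeline = [("Jan", 12), ("Feb", 19), ("Mar", 27), ("Apr", 31)]

frames = []
for i in range(1, len(timeline) + 1):
    months = [m for m, _ in timeline[:i]]
    counts = [c for _, c in timeline[:i]]
    fig, ax = plt.subplots(figsize=(6.4, 3.6), dpi=100)
    ax.plot(months, counts, marker="o")
    ax.set_title("Stories per month (illustrative data)")
    ax.set_ylim(0, max(c for _, c in timeline) + 5)
    fig.canvas.draw()
    # Convert the rendered figure to a BGR image for the video writer.
    buf = np.ascontiguousarray(np.asarray(fig.canvas.buffer_rgba())[:, :, :3])
    frames.append(cv2.cvtColor(buf, cv2.COLOR_RGB2BGR))
    plt.close(fig)

h, w = frames[0].shape[:2]
writer = cv2.VideoWriter("news_clip.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 1, (w, h))
for f in frames:
    writer.write(f)
writer.release()
```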

Moviebook's smart media program was also employed to generate video content based on semantic scenes at a Chinese press conference, introducing the event scene in a visualized and intelligent manner and presenting holographic images that change in real time with semantics, expressions, and gestures. While introducing the scene in a simple and understandable way, the resulting video is also more engaging than conventional videos.

In addition, the Automated Video Production component MAGC can help M&E companies better manage both their existing content libraries and new machine-generated content libraries. For example, suppose users want to prioritize stories about world adventures. To meet this need, MAGC can help the company analyze its historical content library against specific criteria, automatically generate new content that meets this particular need, and seamlessly place ads in the process, which may be completed within a second.
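How MAGC queries a content library is not specified. A minimal sketch of the workflow described above, with a hypothetical clip schema, tag set, and file names, might filter segments by topic, build an edit list, and reserve a mid-roll slot for the ad:

```python
# Minimal sketch of the library-analysis step: filter a tagged clip catalog
# by topic and assemble matching segments, leaving a slot for an ad insert.
# The schema, tags, and file names are hypothetical.
from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    tags: set
    start_sec: float
    end_sec: float

# Hypothetical historical content library with topic tags per segment.
library = [
    Clip("ep01.mp4", {"adventure", "mountains"}, 12.0, 45.0),
    Clip("ep02.mp4", {"cooking"}, 0.0, 30.0),
    Clip("ep03.mp4", {"adventure", "desert"}, 5.0, 50.0),
]

def select_segments(clips, topic, max_total_sec=120.0):
    """Pick segments tagged with `topic` until the duration budget is spent,
    mirroring the 'prioritize stories about world adventures' example."""
    chosen, total = [], 0.0
    for clip in clips:
        if topic in clip.tags:
            duration = clip.end_sec - clip.start_sec
            if total + duration > max_total_sec:
                break
            chosen.append(clip)
            total += duration
    return chosen

def build_edit_list(segments, ad_path="ad_15s.mp4"):
    """Return a simple edit decision list with an ad placed mid-roll."""
    edl = [(c.path, c.start_sec, c.end_sec) for c in segments]
    edl.insert(len(edl) // 2, (ad_path, 0.0, 15.0))   # mid-roll ad slot
    return edl

for path, a, b in build_edit_list(select_segments(library, "adventure")):
    print(f"{path}: {a:.1f}s - {b:.1f}s")
```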

“We are seeing that the dramatic growth in multi-screen content and viewing options is creating an urgent need for M&E companies to transform the way content is developed and delivered to address evolving audience demand,” said the person in charge of Moviebook's MAPE. “Today we are creating new video cognitive solutions to help M&E companies thoroughly improve video content development and video data applications, so that customers with limited system engineering or machine learning skills can also quickly analyze and gain insights from large amounts of unstructured data.”

Drawing on Moviebook’s previous project implementation experience, M&E industry experience, and other expertise, the new MAPE service provides assurance for customers’ applications. Last year, MAGC was employed to create a “native video clip”: the machine understood and identified effective content from the scenes of a variety show, identified relevant scenes in other video content, and ultimately generated a brand-new video, which was successfully broadcast in the show.

Contact: yangym@huajugr.com

Source: Moviebook

ReleaseID: 540511
