
A Google DeepMind AI language model is now generating descriptions for YouTube Shorts

Google recently combined DeepMind and Google Brain into one big AI team, and on Wednesday, the new Google DeepMind shared details on how one of its visual language models (VLMs) is being used to generate descriptions for YouTube Shorts, which could help with discoverability in search.

DeepMind wrote in the post, “Shorts are created in just a few minutes and often don’t include descriptions and helpful titles, which makes them harder to find through search.” The model, Flamingo, can analyze the initial frames of a video and build on those details to explain what’s happening. (DeepMind gives the example of “a dog balancing a stack of crackers on its head.”) The text descriptions will be stored as metadata to “better categorize videos and match search results to viewer queries.”
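The pipeline the post describes — sample a Short’s early frames, ask a VLM for a caption, and store the result as behind-the-scenes search metadata — can be sketched roughly as follows. DeepMind hasn’t published Flamingo’s serving code, so `caption_frames` below is a hypothetical stand-in stub, not the real model, and the frame labels are toy placeholders for decoded images.

```python
def sample_early_frames(video_frames, n=8):
    """Take the first n frames, mirroring how the post says Flamingo
    analyzes the initial frames of a video."""
    return video_frames[:n]


def caption_frames(frames):
    """Hypothetical VLM call. A real system would send the frames to a
    model such as Flamingo; this stub just summarizes the frame labels."""
    return "a video showing " + ", ".join(sorted(set(frames)))


def generate_shorts_metadata(video_frames):
    """Produce behind-the-scenes metadata: the description is stored for
    search matching, not shown to viewers or attributed to the creator."""
    frames = sample_early_frames(video_frames)
    return {
        "description": caption_frames(frames),
        # Per the article, this metadata stays behind the scenes.
        "visible_to_viewers": False,
    }


# Example with toy frame labels standing in for real image frames.
meta = generate_shorts_metadata(["dog", "dog", "crackers"] * 4)
print(meta["description"])  # → a video showing crackers, dog
```

The key design point the article highlights is that the generated text feeds the search index rather than the public-facing description field.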

It solves a real problem, explains Colin Murdoch, chief business officer at Google DeepMind: for Shorts, creators sometimes don’t add metadata because the process of creating a video is more streamlined than it is for a longer-format video. Todd Sherman, director of product management for Shorts, added that because Shorts are mostly watched on a feed where people swipe to the next video rather than actively browsing, there isn’t as much incentive to add metadata.

“This Flamingo model – the ability to understand these videos and provide us with descriptive text – is really very valuable to help our systems that are already looking for this metadata,” Sherman says. “It allows them to more effectively understand these videos so that we can match users to what they’re looking for.”

The generated descriptions will not be visible to users. “We’re talking about metadata that’s behind the scenes,” says Sherman. “We don’t attribute it to the creators, but a lot of effort is put into making sure it’s accurate.” As for how Google is making sure these descriptions are accurate, “all descriptive text is going to align with our accountability standards,” Sherman says. “It is highly unlikely that descriptive text would be generated that somehow casts a video in a poor light. That is not an outcome we expect.”

Flamingo is already generating descriptions for new Shorts uploads

Flamingo is already generating auto-generated descriptions for new Shorts uploads, according to DeepMind spokesperson Duncan Smith, and has done so for “a large collection of existing videos, including the most-viewed videos.”

I had to ask whether Flamingo would also be applied to longer YouTube videos. “I think it’s entirely conceivable that it could happen,” says Sherman. “I think the need is probably a little less, though.” He notes that for a longer video, a creator may spend hours on things like pre-production, filming, and editing, so adding metadata is a relatively small part of the video-making process. And because people often choose long-form videos based on things like the title and thumbnail, creators already have an incentive to add metadata that helps their videos get discovered.

So I guess the answer is we’ll have to wait and see. But given Google’s major push to bring AI into almost everything, applying Flamingo to longer YouTube videos doesn’t seem out of the realm of possibility, and it could have a big impact on YouTube search in the future.
