Mark Zuckerberg Tests Meta’s AI Model On Old Video Of Him Playing Guitar



Last Updated: February 17, 2024, 17:31 IST

V-JEPA is a predictive analysis model that learns solely from visual media. (Photo Credits: Instagram)

Mark Zuckerberg showcased Meta’s newest artificial intelligence (AI) model by sharing an older video of him playing the guitar and singing for his daughter, Maxima.



Mark Zuckerberg, co-founder and CEO of Meta (formerly Facebook), recently showcased Meta’s newest artificial intelligence (AI) model by sharing an older video of himself playing the guitar and singing for his daughter, Maxima. The video was posted on Instagram, where Zuckerberg shared a throwback clip of him playing a song on the guitar. In the caption, he mentioned testing the video on the AI model named V-JEPA, described on its website as a “non-generative model that learns by predicting missing or masked parts of a video in an abstract representation space.”

Zuckerberg’s caption read, “Throwback to singing one of Max’s favorite songs. I recently tested this video with a new AI model that learns about the world by watching videos. Without being trained to do this, our AI model predicted my hand motion as I strummed chords. Swipe to see the results.”

The post included two videos. The first showed him singing and playing the guitar with Maxima, while the second demonstrated the AI model’s prediction of his hand movements while playing.


Despite being posted only a day ago, the video has already garnered over 53,000 likes.

V-JEPA, or Video Joint Embedding Predictive Architecture, is a predictive analysis model that learns solely from visual media. It not only understands the content of a video but can also predict what will come next.

To train V-JEPA, Meta used a new masking technique. This involved masking parts of the video in both time and space. Some frames were entirely removed, while others had blacked-out fragments. This process forced the model to predict both the current and subsequent frames. Meta claims the model performed this task well. Importantly, the model can analyse videos up to 10 seconds in length.
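To give a rough sense of what spatio-temporal masking means, here is a minimal, illustrative sketch in Python. It drops whole frames (masking in time) and blacks out square patches within the remaining frames (masking in space). The function name, patch size, and masking ratios are hypothetical choices for illustration only, not V-JEPA’s actual configuration.

```python
import numpy as np

def mask_video(video, drop_frame_frac=0.25, patch_frac=0.25, patch=16, seed=0):
    """Illustrative spatio-temporal masking.

    video: array of shape (frames, height, width, channels).
    Returns the masked video and a boolean visibility mask
    (True = pixel left visible, False = pixel hidden).
    """
    rng = np.random.default_rng(seed)
    t, h, w, c = video.shape
    masked = video.copy()
    visible = np.ones((t, h, w), dtype=bool)

    # Temporal masking: remove a random subset of whole frames.
    dropped = rng.choice(t, size=int(t * drop_frame_frac), replace=False)
    masked[dropped] = 0
    visible[dropped] = False

    # Spatial masking: black out random square patches in the kept frames.
    n_patches = int((h // patch) * (w // patch) * patch_frac)
    for f in range(t):
        if f in dropped:
            continue
        for _ in range(n_patches):
            y = int(rng.integers(0, h - patch + 1))
            x = int(rng.integers(0, w - patch + 1))
            masked[f, y:y + patch, x:x + patch] = 0
            visible[f, y:y + patch, x:x + patch] = False
    return masked, visible

# Toy 8-frame clip of 64x64 RGB pixels.
video = np.ones((8, 64, 64, 3), dtype=np.float32)
masked, visible = mask_video(video)
```

A model trained under such a scheme never sees the hidden regions directly; it is scored on how well its predictions for them match, which is what pushes it to learn how scenes evolve over time.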

In a blog post, Meta explained, “For example, if the model needs to distinguish between someone putting down a pen, picking up a pen, and pretending to put down a pen but not actually doing it, V-JEPA is quite good compared to previous methods for that fine-grained action recognition task.”

The use of V-JEPA highlights Meta’s ongoing efforts to innovate in the field of AI and machine learning.


