Current News Today


Microsoft says its AI can describe images ‘as well as people do’


It’s not unusual for companies to tout their AI research breakthroughs, but it’s far rarer for those discoveries to be deployed quickly in shipping products. Xuedong Huang, CTO of Azure AI cognitive services, pushed to integrate the new captioning model into Azure quickly because of its potential benefits for users. His team trained the model on images tagged with individual keywords, giving it a visual vocabulary that most AI frameworks lack. Typically, these sorts of models are trained on images paired only with full captions, which makes it harder for them to learn how specific objects interact.
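To make the distinction concrete, here is a minimal toy sketch, not Microsoft's actual pipeline and with purely illustrative data and function names, contrasting tag-level supervision (one keyword per image region, the "visual vocabulary") with conventional caption-level supervision (a whole sentence with no word-to-object alignment):

```python
# Toy illustration only: the data and helper below are hypothetical,
# meant to show the shape of the two supervision styles.

# Tag-level pretraining data: each image region is paired with one keyword,
# so the model sees a direct word-to-object correspondence.
tag_supervised = [
    ({"region": "r1"}, "dog"),
    ({"region": "r2"}, "frisbee"),
]

# Conventional captioning data: a whole image paired with a full sentence;
# the model must infer on its own which words refer to which objects.
caption_supervised = [
    ({"image": "img1"}, "a dog jumping to catch a frisbee"),
]

def build_vocabulary(pairs):
    """Collect the distinct keywords seen during tag-level pretraining."""
    return sorted({tag for _, tag in pairs})

vocab = build_vocabulary(tag_supervised)
print(vocab)  # the model's "visual vocabulary" of grounded keywords
```

The point of the sketch is the structural difference: tag supervision hands the model an aligned (object, word) pair per example, while caption supervision leaves that alignment implicit inside a sentence.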

“This visual vocabulary pre-training essentially is the education needed to train the system; we are trying to educate this motor memory,” Huang said in a blog post. That pretraining is what gives the new model a leg up on the nocaps benchmark, which measures how well AI systems can caption images they have never seen before.

But while beating a benchmark is significant, the real test for Microsoft’s new model will be how it performs in the real world. According to Boyd, Seeing AI developer Saqib Shaik, who as a blind person himself also pushes for greater accessibility at Microsoft, describes it as a dramatic improvement over the previous offering. And now that Microsoft has set a new milestone, it will be interesting to see how competing models from Google and other research groups respond.





