Facebook is using AI to allow for simpler and more effective searching of pictures.
This added search functionality comes from Lumos, a machine learning system. Previously, search could only rely on the captions and tags attached to a photo; now, Lumos reads the pictures themselves and adds descriptions based on what is featured in them.
Using deep learning and a neural network, Facebook's developers trained Lumos to identify objects using tens of millions of properly annotated photos. Descriptions can now include actions such as “people walking,” “people dancing,” “people riding horses,” “people playing instruments,” and more.
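The output of such a classifier can be imagined as a confidence score per concept, with scores above a threshold becoming the photo's tags. The sketch below illustrates only that final tagging step; the concept names come from the article, but the scores, threshold, and function are hypothetical stand-ins, since Facebook's actual Lumos model is not public.

```python
# Hypothetical sketch: turning a classifier's per-concept confidence
# scores into photo tags. The scores dict stands in for the output of
# a trained neural network; the 0.5 threshold is an assumption.

def tags_from_scores(scores, threshold=0.5):
    """Keep every concept whose predicted confidence clears the threshold."""
    return [concept for concept, score in scores.items() if score >= threshold]

# Fake confidences a model might emit for one photo (illustrative only).
scores = {
    "people walking": 0.91,
    "people dancing": 0.12,
    "people riding horses": 0.03,
    "people playing instruments": 0.67,
}
print(tags_from_scores(scores))  # → ['people walking', 'people playing instruments']
```

In a real system the threshold would be tuned per concept to balance precision against recall, but the idea is the same: the model, not a human, decides which descriptions attach to the photo.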
This makes finding specific pictures much easier for users who may not remember who snapped the photos or when they were taken. Now, the search system can sort through this vast amount of information and bring up the most relevant photos quickly and easily.
For example, users can search “black shirt photo” and the system can “see” whether there is a black shirt in the photo and match on that, even if that information was never tagged in the photo. Facebook says that searches can pick up on locations, objects, animals, attractions, and clothing items.
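One simple way to picture this kind of search is an inverted index over the machine-generated tags: each tag maps to the photos it was detected in, and a query intersects those sets. The sketch below is a toy illustration under that assumption; the photo IDs and tags are made up, and Facebook's actual indexing and ranking are not public.

```python
# Hypothetical sketch of content-based photo search: each photo carries
# machine-generated tags, and an inverted index maps each tag to the
# photos that contain it. Illustrative only, not Facebook's system.
from collections import defaultdict

photo_tags = {
    "photo_1": {"black shirt", "people walking", "outdoor"},
    "photo_2": {"white shirt", "people dancing"},
    "photo_3": {"black shirt", "people playing instruments"},
}

# Build the inverted index: tag -> set of photo ids.
index = defaultdict(set)
for photo, tags in photo_tags.items():
    for tag in tags:
        index[tag].add(photo)

def search(query_tags):
    """Return photos matching every queried tag, even ones never hand-tagged."""
    results = [index[tag] for tag in query_tags]
    return sorted(set.intersection(*results)) if results else []

print(search({"black shirt"}))  # → ['photo_1', 'photo_3']
```

Because the tags come from the classifier rather than from user captions, a photo with an untagged black shirt still turns up for the query, which is exactly the behavior the article describes.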
These improvements also help Facebook become more accessible, as the AI will be able to describe the content of images and videos to the visually impaired. To do this, the AI team gathered a sample of 130,000 public photos shared on Facebook that included people. Human annotators wrote a single-line description of each photo, as if they were describing it to a visually impaired friend.
The social media giant credits its FBLearner Flow system, on which Lumos was built, for the changes. It says Flow now runs 1.2 million AI experiments per month, six times as many as it was running a year ago.
Facebook has been investing heavily in AI recently, including through a major partnership with companies such as Google and Microsoft to research and develop the technology.