Google on Thursday announced a handful of updates to its Search function, touting that it has implemented artificial intelligence (AI) and machine learning to improve the user experience.
Users can now hum, whistle, or sing a melody to Google via the mobile app by tapping the mic icon and saying, “What’s this song?” or by clicking the “Search a song” button.
Humming for 10-15 seconds will give Google’s machine learning algorithm the chance to match the song.
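Google has not published how its matching works, but the general flavor of query-by-humming can be sketched in a few lines: reduce the hum to a key-invariant pitch contour and compare it against the contours of catalog songs. Everything below (the interval representation, the distance measure, the toy catalog) is illustrative, not Google's system.

```python
# Illustrative sketch only -- Google has not disclosed its algorithm.
# Query-by-humming is often framed as matching key-invariant pitch contours.

def contour(pitches):
    """Reduce a pitch sequence (MIDI note numbers) to relative intervals,
    making the representation invariant to the key the user hums in."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def distance(hum, song):
    """Mean absolute difference between two pitch contours."""
    c1, c2 = contour(hum), contour(song)
    n = min(len(c1), len(c2))
    return sum(abs(a - b) for a, b in zip(c1, c2)) / n

def best_match(hum, catalog):
    """Return the catalog song whose contour is closest to the hum."""
    return min(catalog, key=lambda name: distance(hum, catalog[name]))
```

Because the comparison is over intervals rather than absolute pitches, a hum transposed into a different key still matches the same melody.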
The feature is currently available in English on iOS, and in around 20 languages on Android, with more languages coming to both platforms in the future, Google said.
The search giant’s AI updates also span spelling and general search queries.
This includes a new spelling algorithm that uses a deep neural net, which Google claims has significantly improved its ability to decipher misspellings.
In a blog post, Google said it wanted to focus on spelling updates because one in 10 queries is misspelled.
“In fact, this single change makes a greater improvement to spelling than all of our improvements over the last five years,” the blog says.
“A new spelling algorithm helps us understand the context of misspelled words, so we can help you find the right results, all in under 3 milliseconds.”
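For contrast with Google's context-aware deep neural net, the classic non-neural baseline it improves upon can be sketched as simple edit-distance candidate generation: list every string one edit away from the typo and pick the most frequent known word. This toy corrector, including its tiny vocabulary, is illustrative only and uses no context at all.

```python
# A classic (non-neural) spelling baseline, for contrast with Google's
# neural, context-aware corrector: generate edit-distance-1 candidates
# and pick the most frequent known word. Illustrative only.
import string

def edits1(word):
    """All strings one edit (delete, replace, or insert) from word."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word, vocab_counts):
    """Return the highest-count vocabulary word among edit-1 candidates."""
    if word in vocab_counts:
        return word
    candidates = edits1(word) & vocab_counts.keys()
    return max(candidates, key=vocab_counts.get) if candidates else word
```

The limitation is visible in the design: with no surrounding words, the baseline cannot tell which of two plausible corrections the user meant, which is exactly the gap a context-reading neural model closes.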
Similarly, Search will now index individual passages from pages, allowing it to pinpoint the specific block of text that matches a search rather than only the page as a whole.
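The core idea of passage indexing can be sketched by scoring each passage of a page separately instead of scoring the page as one unit. The crude term-overlap score below stands in for real relevance ranking and is purely illustrative.

```python
# Sketch of the passage-indexing idea: score passages individually so the
# single best passage can be surfaced. Term overlap stands in for real
# relevance ranking here; this is illustrative only.

def score(text, query):
    """Count query terms that appear in the text (crude overlap score)."""
    terms = set(query.lower().split())
    return sum(1 for w in text.lower().split() if w in terms)

def best_passage(page_passages, query):
    """Return the passage with the highest overlap score."""
    return max(page_passages, key=lambda p: score(p, query))
```

A page that is mostly off-topic can still win for a query if one of its passages matches closely, which is the behavior the update describes.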
Neural nets have also been applied to understand subtopics around an interest, which Google said would help deliver a greater diversity of content when users make broad searches.
“As an example, if you search for ‘home exercise equipment’, we can now understand relevant subtopics, such as budget equipment, premium picks, or small space ideas, and show a wider range of content for you on the search results page,” the company said, noting the feature will start rolling out by the end of the year.
After kicking off the Data Commons Project in 2018, which brought together data sets from the likes of the US Census, the Bureau of Labor Statistics, and the World Bank, Google has put them to work, making the information more accessible and useful through Search.
“Now when you ask a question like ‘how many people work in Chicago’, we use natural language processing to map your search to one specific set of the billions of data points in Data Commons to provide the right stat in a visual, easy to understand format,” the blog explained.
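The mapping step the blog describes (question in, one statistic out) can be caricatured with a keyword router. The variable names, the place table, and the employment figure below are all invented for illustration; the real system uses natural language processing over billions of Data Commons data points.

```python
# Toy sketch of mapping a question to one statistical variable and one
# place. All names and the figure below are invented for illustration;
# the real system uses NLP over billions of Data Commons data points.

STATS = {  # hypothetical (variable, place) -> value table
    ("employed_persons", "Chicago"): 1_350_000,  # invented figure
}

KEYWORDS = {"work": "employed_persons", "jobs": "employed_persons"}
PLACES = {"chicago": "Chicago"}

def answer(question):
    """Route a question to a (variable, place) pair and look up the stat."""
    words = question.lower().replace("?", "").split()
    var = next((KEYWORDS[w] for w in words if w in KEYWORDS), None)
    place = next((PLACES[w] for w in words if w in PLACES), None)
    if var and place and (var, place) in STATS:
        return f"{place}: {STATS[(var, place)]:,} ({var})"
    return "no match"
```

The hard part in practice is the first line of `answer`: deciding which of many candidate variables ("employed persons", "labor force", "jobs added") a loosely worded question actually asks for.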
Google also noted its BERT language understanding systems were now used in almost every query in English, which it said has helped users receive higher quality results for questions.
Using AI, Google said it could now understand the deep semantics of a video and automatically identify key moments.
This feature, which is currently being tested, will tag key moments in a video.
Google also announced new ways to use Google Lens and augmented reality while learning and shopping.
Searching by taking a photo or screenshot was already available, but now users can tap and hold an image in the Google app or Chrome on Android, and Lens will find exact or similar items and suggest ways to style them.
The feature is coming soon to the Google app on iOS.
Lens can now recognise 15 billion objects, helping to identify things like plants, animals, and landmarks. It can also translate over 100 languages.
Turning to COVID-19, Google announced a handful of improvements to the way it handles coronavirus-related queries, including showing the “busyness status” of a particular place.
“Live busyness updates show you how busy a place is right now so you can more easily social distance, and we’ve added a new feature to Live View to help you get essential information about a business before you even step inside,” Google said.
COVID-19 safety information will now be placed on business profiles across Google Search and Maps.
“This will help you know if a business requires you to wear a mask, if you need to make an advance reservation, or if the staff is taking extra safety precautions, like temperature checks,” it added.