At I/O ’19: Building a more helpful Google for everyone

Today, we welcomed thousands of people to I/O, our annual developer conference. It’s one of my favorite events of the year because it gives us a chance to show how we’re bringing Google’s mission to life through new technological breakthroughs and products.

Our mission to make information universally accessible and useful hasn’t changed in our 21 years, but our approach has evolved. Google is no longer a company that just helps you find answers. Today, Google products also help you get stuff done, whether it’s finding the right words with Smart Compose in Gmail, or the fastest way home with Maps.

Simply put, our vision is to build a more helpful Google for everyone, no matter who you are, where you live, or what you’re hoping to accomplish. When we say helpful, we mean giving you the tools to increase your knowledge, success, health, and happiness. I’m excited to share some of the products and features we announced today that are bringing us closer to that goal.

Helping you get better answers to your questions

People turn to Google to ask billions of questions every day. But there’s still more we can do to help you find the information you need. Today, we announced that we’ll bring the popular Full Coverage feature from Google News to Search. Using machine learning, we’ll identify different points of a story—from a timeline of events to the key people involved—and surface a breadth of content including articles, tweets and even podcasts.

Sometimes the best way to understand new information is to see it. New features in Google Search and Google Lens use the camera, computer vision and augmented reality (AR) to provide visual answers to visual questions. And now we’re bringing AR directly into Search. If you’re searching for new shoes online, you can see them up close from different angles and even see how they go with your current wardrobe. You can also use Google Lens to get more information about what you’re seeing in the real world. So if you’re at a restaurant and point your camera at the menu, Google Lens will highlight which dishes are popular and show you pictures and reviews from people who have been there before. In Google Go, a search app for first-time smartphone users, Google Lens will read out loud the words you see, helping the millions of adults around the world who struggle to read everyday things like street signs or ATM instructions.

Google Lens: Urmila’s Story

Helping to make your day easier

Last year at I/O we introduced our Duplex technology, which can make a restaurant reservation through the Google Assistant by placing a phone call on your behalf. Now, we’re expanding Duplex beyond voice to help you get things done on the web. To start, we’re focusing on two specific tasks: booking rental cars and movie tickets. Using “Duplex on the Web,” the Assistant will automatically enter information, navigate a booking flow, and complete a purchase on your behalf. And with massive advances in deep learning, it’s now possible to bring much more accurate speech and natural language understanding to mobile devices—enabling the Google Assistant to work faster for you.

We continue to believe that the biggest breakthroughs happen at the intersection of AI, software and hardware, and today we announced two Made by Google products: the new Pixel 3a (and 3a XL), and the Google Nest Hub Max. With Pixel 3a, we’re giving people the same features they love on more affordable hardware. Google Nest Hub Max brings the helpfulness of the Assistant to any room in your house, and much more.

Building for everyone

Building a more helpful Google is important, but it’s equally important to us that we are doing this for everyone. From our earliest days, Search has worked the same, whether you’re a professor at Stanford or a student in rural Indonesia. We extend that same approach to developing technology responsibly and securely, in a way that benefits everyone.

This is especially important in the development of AI. Through a new research approach called TCAV—or Testing with Concept Activation Vectors—we’re working to address bias in machine learning and make models more interpretable. For example, TCAV could reveal if a model trained to detect images of “doctors” mistakenly assumed that being male was an important characteristic of being a doctor because there were more images of male doctors in the training data. We’ve open-sourced TCAV so everyone can make their AI systems fairer and more interpretable, and we’ll be releasing more tools and open datasets soon.
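To make the idea concrete: a concept activation vector (CAV) is the weight vector of a simple linear classifier trained to separate a layer’s activations on concept examples from its activations on random examples, and the TCAV score is the fraction of a class’s inputs whose prediction moves up when the activations shift in that concept direction. Here’s a minimal Python sketch of that logic using NumPy and scikit-learn on synthetic data; it illustrates the technique, not the API of the open-sourced TCAV library, and the synthetic arrays stand in for activations and gradients you would extract from a real model.

```python
# A minimal sketch of the TCAV idea on synthetic data (not the library's API).
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Train a linear classifier separating concept activations from random
    activations; its normalized weight vector is the Concept Activation Vector."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_logit_grads, cav):
    """Fraction of class examples whose logit increases when the layer's
    activations move in the concept direction (positive directional derivative)."""
    return float(np.mean(class_logit_grads @ cav > 0))

# Synthetic demo: 8-dimensional "activations" where the concept lies along axis 0.
rng = np.random.default_rng(0)
concept_acts = rng.normal(size=(100, 8)) + np.array([3.0] + [0.0] * 7)
random_acts = rng.normal(size=(100, 8))
cav = compute_cav(concept_acts, random_acts)

# Gradients of the class logit that mostly point along the concept direction,
# i.e. a model whose "doctor" prediction really does lean on the concept.
grads = rng.normal(size=(200, 8)) + np.array([2.0] + [0.0] * 7)
print(tcav_score(grads, cav))  # near 1.0: a red flag worth investigating
```

In a real audit, the synthetic arrays would be replaced by activations from a chosen hidden layer and gradients of the target class’s logit with respect to those activations, computed against several random baselines to check that the score is statistically meaningful.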

Another way we’re building responsibly for everyone is by ensuring that our products are safe and private. We’re making a set of privacy improvements so that people have clear choices around their data. Google Account, which provides a single view of your privacy control settings, will now be easily accessible in more products with one tap. Incognito mode is coming to Maps, which means you can search and navigate without linking that activity to your Google Account, and new auto-delete controls let you choose how long to save your data. We’re also making several security improvements in Android Q, and we’re building the protection of a security key right into the phone for two-step verification.

As we look ahead, we’re challenging the notion that products need more data to be more helpful. A new technique called federated learning allows us to train AI models and make products smarter without raw data ever leaving your device. With federated learning, Gboard can learn new words like “zoodles” or “Targaryen” after thousands of people start using them, without us knowing what you’re typing. In the future, AI advancements will provide even more ways to make products more helpful with less data.
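At its core, federated learning inverts the usual flow: the shared model travels to the device, trains on data that stays local, and only the resulting weight updates are sent back and averaged on the server. The sketch below shows federated averaging on a toy linear-regression model with simulated devices; it’s a minimal illustration of the technique under those assumptions, not Gboard’s actual training setup.

```python
# A minimal federated-averaging sketch with simulated devices (illustrative only).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of gradient descent on one device's private data.
    Only the updated weights leave the device, never the raw examples."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient, linear model
    return w

def federated_round(global_weights, device_datasets):
    """Each device trains locally; the server averages the returned weights,
    weighted by how many examples each device holds."""
    updates = [local_update(global_weights, X, y) for X, y in device_datasets]
    sizes = [len(y) for _, y in device_datasets]
    return np.average(updates, axis=0, weights=sizes)

# Simulate three devices whose private data come from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, devices)
print(w)  # approaches true_w, though the server never saw a raw example
```

Production systems layer more on top of this loop, such as secure aggregation so the server only ever sees the combined update, but weighted weight-averaging across devices is the essential mechanism.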

Building for everyone also means ensuring that everyone can access and enjoy our products, including people with disabilities. Today we introduced several products with new tools and accessibility features, including Live Caption, which can caption a video, a podcast, or a conversation happening in your home. In the future, Live Relay and Project Euphonia will help people who have trouble communicating verbally, whether because of a speech disorder or hearing loss.

Project Euphonia: Helping everyone be better understood

Developing products for people with disabilities often leads to advances that improve products for all of our users. This is exactly what we mean when we say we want to build a more helpful Google for everyone. We also want to empower other organizations that are using technology to improve people’s lives. Today, we recognized the winners of the Google AI Impact Challenge, 20 organizations using AI to solve the world’s biggest problems—from creating better air quality monitoring systems to speeding up emergency responses.

Our vision to build a more helpful Google for everyone can’t be realized without our amazing global developer community. Together, we’re working to give everyone the tools to increase their knowledge, success, health and happiness. There’s a lot happening, so make sure to keep up with all the I/O-related news.

Source: Official Android Blog