The world is a computer, filled with an incredible amount of data. By 2020, the average person will generate 1.5 GB of data a day, a smart home 50 GB, and a smart city a whopping 250 petabytes per day. This data presents an enormous opportunity for developers, giving them a seat of power as well as tremendous responsibility, and that's why we don't take lightly our job of equipping them with the tools and guidance to change the world. On stage at Build in Seattle this morning, Microsoft CEO Satya Nadella is describing this new worldview, fueled by AI that can power better health care, relieve challenges around basic human needs and create a society that's more inclusive and accessible.

Helping create a better, safer, more just world is a responsibility we take seriously at Microsoft, and we've always been committed to the ethical creation and use of technology. As AI increasingly becomes part of our lives, Microsoft's commitment to advancing human good has never been stronger. Today, we're announcing AI for Accessibility, a new $25 million, five-year program aimed at harnessing the power of AI to amplify human capability for the more than one billion people around the world with disabilities.

AI for Accessibility is a call to action for developers, NGOs, academics, researchers and inventors to accelerate their work for people with disabilities, focusing on three areas: employment, human connection and modern life. The program includes grants, technology and AI expertise to accelerate the development of accessible, intelligent AI solutions, and it builds on recent advancements in Azure Cognitive Services that help developers create intelligent apps for people with hearing, vision and other disabilities. Real-time speech-to-text transcription, visual recognition services and predictive text functionality that suggests words as people type are just a few examples. We've already seen this impact through Seeing AI and automatic alt text, which empower people who are blind or have low vision, and through Helpicto, which helps people with autism.
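As a concrete illustration of the kind of visual recognition service these apps build on, here is a minimal sketch that asks the Computer Vision API's describe operation for an image caption, the same sort of output behind auto-generated alt text. The region, API version and subscription key below are placeholders, not values from this announcement.

```python
import requests

# Placeholders: substitute your own Cognitive Services region and key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/describe"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def caption_image(image_url: str) -> str:
    """Request a one-sentence description of an image, the same kind of
    caption used to auto-generate alt text for screen readers."""
    response = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return captions[0]["text"] if captions else "No caption generated."

print(caption_image("https://example.com/photo.jpg"))
```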

If AI is the heart of how we can advance society, the intelligent cloud and the intelligent edge are the backbone. In the next 10 years, billions of everyday devices will be connected: smart devices that can see, listen, reason, predict and more, without a 24/7 dependence on the cloud. This is the intelligent edge, the interface between the computer and the real world. The edge brings AI and the cloud together to collect and make sense of new information, especially in scenarios that are too dangerous for humans or that require new approaches to solve, whether on the factory floor or in the operating room.

Today we're giving developers the tools and guidance to build these possibilities. For example, we're making it easier to build apps at the edge by open sourcing the Azure IoT Edge Runtime, allowing customers to modify the runtime and customize applications at the edge. We're also giving developers Custom Vision, the first Azure Cognitive Service available for the edge, so they can build applications whose AI algorithms interpret, listen, speak and see on edge devices.

We are partnering with both DJI and Qualcomm. Microsoft and DJI, the world's largest drone company, will collaborate on commercial drone solutions so that developers in key vertical segments such as agriculture, construction and public safety can build life-changing applications, such as tools that help farmers produce more crops. With Qualcomm Technologies Inc., we announced a joint effort to create a vision AI developer kit running Azure IoT Edge for camera-based IoT solutions; the kit's camera can run advanced Azure services such as machine learning and Cognitive Services models that are downloaded from Azure and executed locally on the edge.

Other advancements include a preview of Project Brainwave, an architecture for deep-neural-network processing that is now available on Azure and on the edge. Project Brainwave makes Azure the fastest cloud for running real-time AI today.
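To make the edge-module idea concrete, here is a minimal sketch of a custom module written against the open-source azure-iot-device Python SDK. The "input1" and "output1" route names are the SDK's conventional defaults, and the processing step is a placeholder for whatever local AI the module runs.

```python
from azure.iot.device import IoTHubModuleClient

def main():
    # The IoT Edge runtime injects connection settings into every
    # module container; the SDK reads them from the environment.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        while True:
            # Block until a message arrives on this module's input route.
            message = client.receive_message_on_input("input1")
            # ... run local inference (e.g., a Custom Vision model) here ...
            # Forward the (possibly enriched) message to the next route.
            client.send_message_to_output(message, "output1")
    finally:
        client.disconnect()

if __name__ == "__main__":
    main()
```

The routes between modules, and between modules and IoT Hub, are declared in the deployment manifest rather than in code, which is what lets the same module be rewired across solutions without modification.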

We are also releasing new Azure Cognitive Services updates such as a unified Speech service that makes it easier for developers to add speech recognition, text-to-speech, customized voice models and translation to their applications. In addition, we’re making Azure the best place to develop conversational AI experiences integrated with any agent. New updates to Bot Framework, combined with our new Cognitive Services updates, will power the next generation of conversational bots, enabling richer dialogs and full personality and voice customization to match a company’s brand identity.
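For a flavor of what building on Bot Framework looks like, here is a minimal echo bot sketched with the botbuilder-core Python SDK. The greeting and echo logic are illustrative stand-ins; a production bot would call into Cognitive Services, such as the unified Speech service or language understanding, before deciding how to respond.

```python
from typing import List

from botbuilder.core import ActivityHandler, TurnContext
from botbuilder.schema import ChannelAccount

class EchoBot(ActivityHandler):
    """Handles conversation events routed to the bot by Bot Framework."""

    async def on_members_added_activity(
        self, members_added: List[ChannelAccount], turn_context: TurnContext
    ):
        # Greet each user who joins the conversation (skip the bot itself).
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello! I'm a demo bot.")

    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the utterance back; real dialog logic would go here.
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")
```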

Eight years ago we shipped Kinect, the first AI device with speech, gaze and vision capabilities, and we then took that technology forward with Microsoft HoloLens. We've seen developers build transformative solutions across a multitude of industries, from security to manufacturing to health care and more. As sensor technology has evolved, we see incredible possibilities in combining these sensors with the power of Azure AI services such as machine learning, Cognitive Services and IoT Edge.

Today we are excited to announce a new initiative, Project Kinect for Azure — a package of sensors from Microsoft that contains our unmatched time-of-flight depth camera, with onboard compute, in a small, power-efficient form factor — designed for AI on the edge. Project Kinect for Azure brings together this leading hardware technology with Azure AI to empower developers with new scenarios for working with ambient intelligence.

Similarly, our Speech Devices software development kit announced today delivers superior audio processing from multi-channel sources for more accurate speech recognition, including noise cancellation, far-field voice and more. With this SDK, developers can build for a variety of voice-enabled scenarios like drive-thru ordering systems, in-car or in-home assistants, smart speakers and other digital assistants.
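The Speech Devices SDK targets dedicated multi-microphone hardware, but its programming model resembles the standard Speech SDK. Below is an illustrative sketch of single-shot recognition using the Python Speech SDK (pip install azure-cognitiveservices-speech); the subscription key and region are placeholders, and the Devices SDK layers multi-channel audio processing on top of this kind of recognition loop.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech service key and region.
speech_config = speechsdk.SpeechConfig(
    subscription="<your-speech-key>", region="westus"
)
# With no audio config supplied, the recognizer uses the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

print("Say something...")
result = recognizer.recognize_once()  # listen for a single utterance

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
else:
    print("Recognition canceled or failed:", result.reason)
```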

This new age of technology is also fueled by mixed reality, which is opening up new possibilities in the workplace. Today we announced two new apps that will help empower firstline workers, the first workers to interface with customers and triage problems: Microsoft Remote Assist and Microsoft Layout. Microsoft Remote Assist enables remote collaboration via hands-free video calling, letting firstline workers share what they see with any expert on Microsoft Teams, while staying hands on to solve problems and complete tasks together. In a similar vein, Microsoft Layout lets workers design spaces in context with mixed reality, using 3D models for creating room layouts with holograms.

Whether creating a more inclusive and accessible world, solving problems that plague humanity or helping improve the way we work and live, developers are playing a leading role. As new ideas and solutions with AI and intelligent edge emerge, Microsoft will continue to advocate for developers and give them the tools and cloud services that make it possible to build these new solutions to solve real problems. From the top down, we are a developer-led company that continues to invest in coders and give them free rein to solve problems.

Learn more about how we’re empowering developers to build for this future today using Azure and M365, via blog posts from Executive Vice President of Cloud + AI Scott Guthrie and Corporate Vice President of Windows Joe Belfiore.
