Play Senua's Saga: Hellblade II with Game Pass on Xbox Series X|S

Play Senua’s Saga: Hellblade II day one with Game Pass on Xbox Series X|S. Power Your Dreams with Xbox Series X|S:

Within the Xbox family of devices, Xbox Series X is our fastest, most powerful Xbox ever. Experience the thrill of native 4K gaming at up to 120 frames per second. Xbox Series X is perfect for the gamer looking to maximize their gaming experience.

Xbox Series S continues to be the best value in gaming, featuring next-gen speed and performance. Choose from our 512GB Xbox Series S, or, if you’re looking for more storage to download even more incredible games, the recently released Carbon Black Xbox Series S with 1TB of storage is the best option for you.



Source: XBOX YouTube

Build the next wave of AI on Windows with DirectML support for PyTorch 2.2

Today, Windows developers can leverage PyTorch to run inference on the latest models across the breadth of GPUs in the Windows ecosystem, thanks to DirectML. We’ve updated Torch-DirectML to use DirectML 1.13 for acceleration and to support PyTorch 2.2. PyTorch with DirectML simplifies the setup process through a one-package install, making it easy to try out AI-powered experiences and to scale AI to your customers across Windows.

To see these updates in action, check out our Build session Bring AI experiences to all your Windows Devices.

See here to learn how our hardware vendor partners are making this experience great:

  • AMD: AMD is glad PyTorch with DirectML is enabling even more developers to run LLMs locally. Learn more about where else AMD is investing with DirectML.
  • Intel: Intel is excited to support Microsoft’s PyTorch with DirectML goals – see our blog to learn more about the full support that’s available today.
  • NVIDIA: NVIDIA looks forward to developers using the torch-directml package accelerated by RTX GPUs. Check out all the NVIDIA-related Microsoft Build announcements around RTX AI PCs and NVIDIA’s expanded collaboration with Microsoft.

PyTorch with DirectML is easy to use with the latest Generative AI models

PyTorch with DirectML provides an easy-to-use way for developers to try out the latest and greatest AI models on their Windows machine. This update builds on DirectML’s world-class inferencing platform, ensuring these optimizations provide a scalable and performant experience across the latest Generative AI models. Our aim in this update is to ensure a seamless experience with relevant Gen AI models, such as Llama 2, Llama 3, Mistral, Phi 2, and Phi 3 Mini, and we’ll expand our coverage even more in the coming months!

The best part is using the latest Torch-DirectML package with your Windows GPU is as simple as running:

pip install torch-directml
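The installed package exposes a DirectML device that tensors and models can be moved to, much like a CUDA device. Here is a minimal sketch, with a CPU fallback so it also runs on machines where torch-directml (or a DirectX 12 GPU) is absent:

```python
import torch

# Use the DirectML device when torch-directml is installed;
# otherwise fall back to CPU so this sketch still runs anywhere.
try:
    import torch_directml
    device = torch_directml.device()
except ImportError:
    device = torch.device("cpu")

x = torch.randn(4, 4).to(device)
y = torch.mm(x, x)  # executed on the DirectML device when available
print(tuple(y.shape))  # (4, 4)
```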

Once installed, check out our language model sample that will get you running a language model locally in no time! Start by installing a few requirements and logging into the Hugging Face CLI:

pip install -r requirements.txt
huggingface-cli login

Next, run the following command, which downloads the specified Hugging Face model, optimizes it for DirectML, and runs the model in an interactive chat-based Gradio session!

python --model_repo "microsoft/Phi-3-mini-4k-instruct"

Phi 3 Mini 4K running locally using DirectML through the Gradio Chatbot interface.

These latest PyTorch with DirectML samples work across a range of machines and perform best on recent GPUs equipped with the newest drivers. Check out the Supported Models section of the sample for more info on the GPU memory requirements for each model.

This seamless inferencing experience is powered by our close co-engineering relationships with our hardware partners, making sure you get the most out of your Windows GPU when leveraging DirectML.

Try out PyTorch with DirectML today

Trying out this update is truly as simple as running “pip install torch-directml” in your existing Python environment and following the instructions in one of our samples. For more guidance on getting set up, visit the Enable PyTorch with DirectML on Windows page on Microsoft Learn.

This is only the beginning of the next chapter with DirectML and PyTorch! Stay tuned for broader use case coverage, expansion to other local accelerators, like NPUs, and more. Our goal is to meet developers where they’re at, so they can use the right tools to build the next wave of AI innovation.

We’re excited for developers to continue innovating with cutting edge Generative AI on Windows and build the AI apps of the future!

Source: Windows Blog

Quantization with DirectML helps you scale further on Windows

DirectML support for Phi 3 mini launched last month and we’ve since made several improvements, unlocking more models and even better performance!

Developers can grab already-quantized versions of Phi-3 mini (with variants for the 4k and 128k versions). They can now also get Phi-3 medium (4k and 128k) and Mistral v0.2. Stay tuned for additional pre-quantized models! We’ve also shipped a Gradio interface to make it easier to test these models with the new ONNX Runtime Generate() API. Learn more.

Be sure to check out our Build sessions to learn more. See below for details.

What is quantization?

Memory bandwidth is often a bottleneck for getting models to run on entry-level and older hardware, especially when it comes to language models. This means that making language models smaller directly translates to increasing the breadth of devices developers can target.

There’s been a lot of research into reducing model size through quantization, a process that reduces the precision and therefore size of model weights.

Our goal is to ensure scalability while also maintaining model accuracy, so we integrated support for models that have had Activation-aware Weight Quantization (AWQ) applied to them. AWQ is a technique that lets us reap the memory savings from quantization with only a minimal impact on accuracy. It achieves this by identifying the top 1% of salient weights that are needed for maintaining model accuracy, then quantizing the remaining 99% of weights. This leads to much less accuracy loss with AWQ compared to other techniques.
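To make the mixed-precision idea concrete, here is a toy NumPy sketch, not the real AWQ algorithm (which derives salience from measured activation statistics and applies per-channel scaling): all weights are rounded to a 4-bit grid, except a ~1% salient fraction that is kept at full precision, which reduces the activation-weighted reconstruction error. The `act_scale` array is a hypothetical stand-in for activation statistics.

```python
import numpy as np

def quantize_int4(w, group_size=8):
    """Symmetric 4-bit quantization round-trip: each group shares one scale."""
    g = w.reshape(-1, group_size)
    scale = np.abs(g).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0
    q = np.clip(np.round(g / scale), -8, 7)
    return (q * scale).reshape(w.shape)

rng = np.random.default_rng(0)
weights = rng.normal(size=256)
act_scale = rng.uniform(0.1, 10.0, size=256)  # stand-in for activation statistics

# Naive: quantize every weight to 4 bits.
naive = quantize_int4(weights)

# AWQ-style: keep the ~1% most salient weights (largest weight x activation
# magnitude) at full precision, quantize the remaining 99%.
k = max(1, int(0.01 * weights.size))
salient = np.argsort(-np.abs(weights) * act_scale)[:k]
mixed = naive.copy()
mixed[salient] = weights[salient]

# Error weighted by activation scale, mimicking the impact on model output.
err = lambda approx: np.mean(((weights - approx) * act_scale) ** 2)
print(f"naive: {err(naive):.4f}  awq-style: {err(mixed):.4f}")
```

Restoring the salient weights can only shrink the error, since their quantization error drops to zero while all other weights are untouched.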

The average person reads up to 5 words/second. Thanks to the significant memory wins from AWQ, Phi-3-mini runs at this speed or faster on older discrete GPUs and even laptop integrated GPUs. This translates into being able to run Phi-3-mini on hundreds of millions of devices!

Check out our Build talk below to see this in action!

Perplexity measurements

Perplexity is a measure used to quantify how well a model predicts a sample. Without getting into the math of it all, a lower perplexity score means the model is more certain about its predictions and suggests that the model’s probability distribution is closer to the true distribution of the data.

Perplexity can be thought of as a way to quantify the average number of branches in front of a model at each decision point. At each step, a lower perplexity would mean that the model has fewer, more confident choices to make, which reflects a more refined understanding of the topic. A higher perplexity would mean more, less confident choices and therefore choices that are less predictable, relevant, and/or varied in quality.
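The branching intuition can be checked directly with a few lines of Python: a hypothetical model that assigns probability 1/2 to every observed token has perplexity 2, i.e. two equally likely branches per step.

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-probability assigned to each token."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Always torn between two equally likely choices -> 2 "branches" per step.
print(perplexity([0.5, 0.5, 0.5, 0.5]))   # ~2.0

# More confident predictions -> lower perplexity.
print(perplexity([0.9, 0.85, 0.95, 0.8]))  # < 2.0
```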

As the table below shows, AWQ leads to only a small loss in model accuracy, visible as a small increase in perplexity. In return, using AWQ means 4x smaller model weights, leading to a dramatic increase in the number of devices that can run Phi-3-mini!

Model variant    Dataset      Base model perplexity    AWQ perplexity    Difference
Phi3 mini 128k   wikitext2    14.42                    14.81             0.39
Phi3 mini 128k   ptb          31.39                    33.63             2.24
Phi3 mini 4k     wikitext2    15.83                    16.52             0.69
Phi3 mini 4k     ptb          31.98                    34.30             2.32
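The 4x figure follows directly from the storage cost per weight: 4 bits instead of 16. Back-of-envelope numbers, assuming roughly 3.8B parameters for Phi-3-mini and ignoring the small overhead of quantization scales and any weights kept at higher precision:

```python
params = 3.8e9                     # approximate Phi-3-mini parameter count
fp16_gb = params * 16 / 8 / 1e9    # 16 bits per weight
int4_gb = params * 4 / 8 / 1e9     # 4 bits per weight
print(f"fp16: {fp16_gb:.1f} GB, int4: {int4_gb:.1f} GB, "
      f"ratio: {fp16_gb / int4_gb:.0f}x")
# fp16: 7.6 GB, int4: 1.9 GB, ratio: 4x
```

At ~1.9 GB of weights, the model fits comfortably in the memory budget of older discrete GPUs and many integrated GPUs, which is what unlocks the device reach described above.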

Learn more

Be sure to check out these sessions at Build to learn more:

Get Started

Check out the ONNX Runtime Generate() API repo to get started today:

See here for our chat app with a handy Gradio interface:

This lets developers choose from different types of language models that work best for their specific use case. Stay tuned for more!


We recommend upgrading to the latest drivers for the best performance.

Source: Windows Blog

Introducing the WebNN Developer Preview with DirectML

We are excited to announce the availability of the developer preview for WebNN, a web standard for cross-platform and hardware-accelerated neural network inference in the browser, using DirectML and ONNX Runtime Web. This preview enables web developers to leverage the power and performance of DirectML across GPUs, with support coming soon for Intel® Core™ Ultra processors with Intel® AI Boost and the Copilot+ PC, powered by Qualcomm® Hexagon™ NPUs.

Diagram showing how WebNN fits in the architecture

WebNN is a game-changer for web development. It’s an emerging web standard that defines how to run machine learning models in the browser, using the hardware acceleration of your local device’s GPU or NPU. This way, you can enjoy web applications that use machine learning without any extra software or plugins, and without compromising your privacy or security. WebNN opens up new possibilities for web applications, such as generative AI, object recognition, natural language processing, and more.

WebNN is a web standard that defines how to interface with different backends for hardware accelerated ML inference. One of the backends that WebNN can use is DirectML, which provides performant, cross-hardware ML acceleration across Windows devices. By leveraging DirectML, WebNN can benefit from the hardware scale, performance, and reliability of DirectML.

With WebNN, you can unleash the power of ML models in your web app. It offers you the core elements of ML, such as tensors, operators, and graphs. You can also combine it with ONNX Runtime Web, a JavaScript library that enables you to run ONNX models in the browser. ONNX Runtime Web includes a WebNN Execution Provider that simplifies your use of WebNN.

To learn more or to see this in action, be sure to check out our various Build sessions. See below for details.

See here to learn what our hardware vendor partners have to say:

  • AMD: AMD is excited about the launch of WebNN with DirectML enabling local execution of generative AI machine learning models on AMD hardware. Learn more about where else AMD is investing with DirectML.
  • Intel: Intel looks forward to the new possibilities WebNN and DirectML bring to web developers – learn more here about our investments in WebNN. Please download the latest driver for best performance.
  • NVIDIA: NVIDIA is excited to see DirectML powering WebNN to bring even more ways for web apps to leverage hardware acceleration on RTX GPUs. Check out all the NVIDIA-related Microsoft Build announcements around RTX AI PCs and NVIDIA’s expanded collaboration with Microsoft.

Getting Started with the WebNN Developer Preview

With the WebNN Developer Preview, powered by DirectML and ONNX Runtime Web, you can run ONNX models in the browser with hardware acceleration and minimal code changes.

To get started with WebNN on DirectX 12 compatible GPUs you will need:

  • Windows 11, version 21H2 or newer
  • ONNX Runtime Web minimum version 1.18
  • Microsoft Edge Canary channel, with the WebNN flag enabled in about:flags

For more instructions and information about supported models and operators, please visit our documentation. To try out samples, please visit the WebNN Developer Preview page.

Learn more

Be sure to check out these sessions at Microsoft’s Build Conference to learn more about WebNN:

Additional WebNN documentation and samples:

Source: Windows Blog

Climb to New Heights with Grappin, Available for Preorder Now

Hello, Xbox community!

I’m Ahmin Hafidi, game designer at Polylabo, a micro (emphasis on the micro) game studio based in Tokyo, Japan.

Today, I would like to introduce you to Polylabo’s first game, Grappin.

Grappin is a first-person adventure game focused on exploration and grappling-hook action.

You wake up alone in your village; it’s dark and raining heavily. Your only way is forward, and soon enough you will stumble upon the Grip, a mysterious artifact that you can also use as a grappling hook. Very convenient, right?

Your goal is set: you need to bring back the Grip to the Grip Shrine, perched on top of the highest mountain. Easier said than done, believe me!

Grappin screenshot

In Grappin, you’ll embark on an epic first-person adventure to reach the top of the mountain. A variety of biomes are waiting for you to explore, each with its own unique challenges. Start your journey in a wide valley at the base of the mountain, navigate through blazing hot lava caves and brave a dangerous icy ridge as you make your way to the summit. And more!

The core of Grappin’s gameplay revolves around the versatile grappling hook called the Grip. The Grip has two distinct forms: the Normal Grip, which allows you to hookshot yourself to clay surfaces, and the Trace Grip, which also hooks to clay surfaces but is capable of covering very long distances at high speed, making it ideal for backtracking and exploration.

Grappin screenshot

There are over 50 relics scattered throughout the world waiting to be discovered. The legend says that getting them all will uncover the mystery surrounding the mountain and the Grip itself, rewarding those who take the time to explore every nook and cranny. Exploration is key!

Grappin screenshot

Benoit, the game’s composer, and I have poured our hearts into making this game, and we hope you will enjoy your journey to the top of the mountain.

It has been a dream of mine to release my own independent title on consoles, and I’m beyond happy to finally say this:

Grappin is now available for pre-orders on Xbox Series X|S, Xbox One and PC!

Launching on June 6!

Enough talk! Now go sharpen your grappling hook and put on your polar coat and scarf!

Are you ready for a gripping adventure?

Xbox Live
Experience the thrill of adventure and put your climbing and platforming skills to the test in GRAPPIN, the first-person adventure game that takes you to new heights.


After a mysterious awakening, you discover the Grip, an artifact that serves as your grappling hook. Your mission is clear: return the Grip to the Grip Shrine, located at the summit of the highest mountain.


Navigate through challenging environments and overcome obstacles as you journey to the top of the mountain. From blazing hot lava caves to treacherous ridges, you’ll need to master the use of your grappling hook to survive. As you explore, uncover more than 50 Relics to unravel the mystery surrounding the Grip and the mountain.

Are you ready for a gripping adventure?


Source: Xbox Blog

Share of the Week: Mythical

Last week we asked you to tap into mythology and legend, sharing creatures and characters found in games inspired by myth using #PSshare #PSBlog. Here are this week’s highlights:

Tigas_VP shares Geralt gazing up at a griffin in The Witcher 3

call_me_xavii shares Aloy gazing up at a hologram of Poseidon, a remnant of Las Vegas in Horizon Forbidden West

reins62831 shares an enemy harpy from Dragon’s Dogma 2

CowboyDbop92 shares the Eikon Odin from Final Fantasy XVI

Defalt368 shares Nuna and Fox approaching a large spirit based on Iñupiat storytelling in Never Alone

ValkyrieQ8 shares a samurai in a kitsune spirit mask in Ghost of Tsushima

Search #PSshare #PSBlog on Twitter or Instagram to see more entries to this week’s theme. Want to be featured in the next Share of the Week?

SUBMIT BY: 11:59 PM PT on May 29, 2024

Next week, take a leap! Share thrilling moments with characters leaping or jumping into action from the game of your choice using #PSshare #PSBlog for a chance to be featured.

Source: Playstation Blog