
Microsoft and Nvidia introduce new AI tools for Windows

by Jack

Why it matters: Riding the surge in generative AI's popularity, Microsoft and Nvidia have significantly ramped up their AI efforts, yet most of the technology still relies on cloud servers. As AI-capable hardware becomes more widely available to consumers, the two companies are releasing tools to reduce users' dependence on remote AI systems.

At the recent Ignite 2023 event, Microsoft and Nvidia announced tools to help customers build and deploy generative AI applications locally. The new software takes advantage of Windows 11's greater emphasis on AI, as well as popular AI models from Microsoft, Meta, and OpenAI.


Microsoft’s new Windows AI Studio combines a slew of models and development tools from the Azure AI Studio and Hugging Face catalogues. It offers configuration interfaces, walkthroughs, and other tools to help developers create and fine-tune small language models.

Users can utilise Windows AI Studio to work with models such as Meta’s Llama 2 and Microsoft’s Phi. In the coming weeks, Microsoft will release the workflow as a VS Code extension. AI Studio’s local AI tasks should eventually be able to take advantage of hardware such as neural processing units (NPUs), which will become more common in future CPU generations.

Meanwhile, Nvidia revealed a major upcoming update to TensorRT-LLM, promising to broaden and accelerate AI apps on Windows 11 PCs while keeping data on local devices rather than cloud servers, potentially easing some customers’ security concerns. The update will be available for laptops, desktops, and workstations equipped with GeForce RTX graphics cards and at least 8GB of VRAM.


Among the new features is a wrapper that makes TensorRT-LLM compatible with OpenAI’s Chat API. Furthermore, when version 0.6.0 is released later this month, AI inference operations will be up to five times faster, and support for new large language models such as Mistral 7B and Nemotron-3 8B will be added on any RTX 3000- or 4000-series GPU with at least 8GB of VRAM.
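The practical appeal of an OpenAI-compatible wrapper is that existing client code can be retargeted at a local inference server simply by changing the endpoint URL. The sketch below illustrates the idea, assuming the wrapper exposes a Chat Completions-style endpoint on localhost; the URL, port, and model name are hypothetical, not documented values from Nvidia.

```python
import json

# Hypothetical local endpoint for the TensorRT-LLM OpenAI-compatible wrapper.
# The host, port, and path here are assumptions for illustration only.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

# A request body in OpenAI's Chat Completions format. Because the wrapper
# accepts this schema, code written against api.openai.com can be pointed
# at the local server without restructuring the request.
request_body = {
    "model": "llama-2-13b-chat",  # assumed name of a locally served model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise TensorRT-LLM in one sentence."},
    ],
    "temperature": 0.7,
}

# Serialise the payload; in real use this would be POSTed to LOCAL_ENDPOINT.
payload = json.dumps(request_body)
print(payload)
```

In practice one would send `payload` with an HTTP client (or point an OpenAI SDK's base URL at the local server); the point is that the request shape, not the client code, is what the wrapper standardises.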

The update will be released on GitHub soon, and the latest optimised AI models will be available at ngc.nvidia.com. Those interested in the new AI Workbench model customisation toolkit can also now sign up for early access.

In related news, Microsoft has rebranded Bing’s AI-powered chatbot as Copilot. When users launch the Bing chat window in Edge or the new Copilot assistant in Windows 11, they will now see the name “Copilot with Bing Chat.”

Bing Chat first emerged as a chatbot within Edge before being integrated into the Copilot assistant, which launched with the recent Windows 11 23H2 update. Unifying the features under one name could strengthen the interface’s position as Microsoft’s answer to ChatGPT.

