    Microsoft launches Maia 200 chip for Azure AI inference

January 28, 2026 | Technology

    MENA Newswire, SAN FRANCISCO: Microsoft on Jan. 26 introduced Maia 200, the second generation of its in-house artificial intelligence accelerator, built to run AI models in production across Azure data centres. The company said Maia 200 is designed for inference, the stage where trained models generate responses to live requests, and will be used to support a range of Microsoft AI services.
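
To make the inference stage concrete: serving a generative model is essentially a loop that calls the model once per output token, feeding each new token back in as input. Below is a minimal sketch in Python with a toy stand-in for the model; the vocabulary and scoring function are illustrative placeholders, not Microsoft's stack.

```python
# Minimal sketch of autoregressive inference: the serving-time loop that
# accelerators like Maia 200 are built to run. The "model" here is a toy
# stand-in (pseudo-random scores), not a real LLM.

import random

VOCAB = ["the", "chip", "runs", "models", "fast", "<eos>"]

def toy_model(tokens):
    """Stand-in for a trained model: returns a score per candidate token."""
    random.seed(len(tokens))  # deterministic for the demo
    return [random.random() for _ in VOCAB]

def generate(prompt, max_new_tokens=8):
    tokens = list(prompt)
    for _ in range(max_new_tokens):  # one model call per output token
        scores = toy_model(tokens)
        next_token = VOCAB[scores.index(max(scores))]  # greedy decode
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the", "chip"]))
```

Each pass through the loop is one model invocation, which is why inference hardware is judged on sustained token throughput and latency to live requests rather than on training speed.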

    Microsoft Maia 200 targets faster AI inference in Azure data centers using custom silicon. (AI-generated image)

    Maia 200 is manufactured on TSMC’s 3-nanometer process and includes more than 140 billion transistors, Microsoft said. The chip pairs compute with a new memory system that includes 216 gigabytes of HBM3e high-bandwidth memory and about 272 megabytes of on-chip SRAM, aimed at sustaining large-scale token generation and other inference-heavy workloads.
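
A rough back-of-envelope calculation shows why that memory capacity matters for serving large models. The sketch below assumes weights dominate memory use and ignores KV-cache, activation, and runtime overhead, so the figures are upper bounds:

```python
# How many model parameters fit in 216 GB of HBM at different weight
# precisions. Simplifying assumptions: weights dominate memory use;
# KV cache, activations, and runtime overhead are ignored.

HBM_BYTES = 216e9  # 216 GB of HBM3e, per Microsoft's stated spec

for name, bits in [("16-bit", 16), ("8-bit", 8), ("4-bit", 4)]:
    bytes_per_weight = bits / 8
    params = HBM_BYTES / bytes_per_weight
    print(f"{name} weights: ~{params / 1e9:.0f}B parameters fit on one chip")
```

At 4-bit precision, roughly 432 billion parameters would fit in HBM alone, which is why large memory capacity and low-precision formats go hand in hand for inference-heavy workloads.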

    Microsoft said Maia 200 delivers more than 10 petaflops of performance at 4-bit precision and about 5 petaflops at 8-bit precision, formats commonly used to run modern generative AI efficiently. The company also said the system is designed around a 750-watt power envelope and is built with scalable networking so chips can be linked for larger deployments.
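
Those 4-bit and 8-bit figures refer to quantized number formats: storing weights and activations in fewer bits trades a small amount of accuracy for roughly proportional throughput gains (note the quoted petaflops double as the bit width halves). Below is a minimal sketch of symmetric 8-bit integer quantization, one common low-precision scheme; it illustrates the general idea, not the specific formats Maia 200 implements in hardware:

```python
# Symmetric int8 quantization of a weight vector: one common low-precision
# scheme. Illustrative of the general idea behind 8-bit inference, not the
# particular 8-bit/4-bit formats Maia 200 supports.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0  # map max |w| -> 127
    q = [round(w / scale) for w in weights]       # 8-bit integer codes
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.42, -1.30, 0.07, 0.99]
q, scale = quantize_int8(w)
print("quantized:", q)                     # [41, -127, 7, 97]
print("recovered:", dequantize(q, scale))  # close to the original values
```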

    The company said the new hardware has begun coming online in an Azure U.S. Central data centre in Iowa, with an additional location planned in Arizona. Microsoft described Maia 200 as its most efficient inference system deployed to date, reporting a 30% improvement in performance per dollar compared with its existing inference systems.

    AI inference focus and Azure deployment

    Microsoft said Maia 200 is intended to support AI products and services that rely on high-volume, low-latency model execution, including workloads running in Azure and Microsoft’s own applications. The company said it has designed the chip and the surrounding system as part of an end-to-end infrastructure approach that includes silicon, servers, networking and software for deploying AI models at scale.

    Alongside the chip, Microsoft announced early access to a Maia software development kit for developers and researchers working on model optimization. The company said the tooling is aimed at helping teams compile and tune models for Maia-based systems, and is structured to fit into common AI development workflows used for deploying inference in the cloud.
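
The article does not describe the Maia SDK's actual interface, so the sketch below shows the common "export, then compile for a backend" workflow that such tooling typically plugs into. The PyTorch and ONNX calls are real, widely used tools; the final Maia-specific compile step is left as a comment because its API is not detailed in the announcement:

```python
# Sketch of the generic "export, then compile for a backend" workflow that
# accelerator SDKs typically slot into. PyTorch/ONNX below are real tools;
# the Maia-specific step is hypothetical and shown only as a comment.

import torch
import torch.nn as nn

# A trained model (toy stand-in here).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Step 1: export to a portable graph format (ONNX) with a fixed input shape.
example_input = torch.randn(1, 128)
torch.onnx.export(model, example_input, "model.onnx", opset_version=17)

# Step 2 (backend-specific, hypothetical): a vendor toolchain such as the
# Maia SDK would take model.onnx, quantize and tune it for the target
# silicon, and emit an artifact the serving runtime loads.
```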

    Performance claims and model support

    Microsoft said Maia 200 is built to run large language models and advanced reasoning systems, and that it will be used for internal and hosted model deployments in Azure. The company has positioned the chip as a production inference accelerator, distinguishing it from training-focused systems that are typically used to build models before deployment.

Microsoft has accelerated custom silicon work as demand has grown for compute to serve generative AI applications, where the cost and availability of accelerators can affect how quickly services scale. Maia 200 follows Maia 100, which Microsoft introduced in 2023, and represents the latest iteration of the company's dedicated AI accelerator line for data centre inference.
