Intel Announces New AI Data Center GPU Code-Named Crescent Island for 2026
At the 2025 OCP Global Summit, Intel unveiled a new addition to its AI accelerator portfolio: a data center GPU code-named Crescent Island, designed specifically for AI inference workloads. The new GPU is expected to offer high memory capacity and energy-efficient performance to meet growing demands in the AI inference market.
Product Specifications and Features
The Crescent Island GPU is being designed as a power- and cost-optimized solution for air-cooled enterprise servers. The chip will offer large memory capacity and bandwidth tailored to inference workflows. Key technical specifications include the Xe3P microarchitecture, optimized for performance per watt, and 160 GB of LPDDR5X memory.
The GPU will support a broad range of data types, making it ideal for “tokens-as-a-service” providers and various inference use cases. Intel is positioning the product to handle the increasing token volumes that come with expanding AI inference applications.
Market Strategy and Timeline
Customer sampling of the Crescent Island GPU is expected to begin in the second half of 2026. Intel's open and unified software stack for heterogeneous AI systems is currently being developed and tested on Arc Pro B-Series GPUs to enable early optimizations and iterations ahead of the Crescent Island launch. Sachin Katti, CTO of Intel, explained the company's strategic direction:
“AI is shifting from static training to real-time, everywhere inference, driven by agentic AI. Scaling these complex workloads requires heterogeneous systems that match the right silicon to the right task, powered by an open software stack. Intel's Xe architecture data center GPU will provide the efficient headroom customers need, and more value, as token volumes surge.”
Industry Context and Intel's Position
Intel emphasized that as inference becomes the dominant AI workload, success requires more than powerful chips; it depends on systems-level innovation. The company stated that inference demands a workload-centric, open approach that integrates diverse compute types with a developer-first software stack, delivered as systems that are easy to deploy and scale.
Intel positions itself to deliver end-to-end solutions from the AI PC to the data center and industrial edge, with offerings built on Intel Xeon 6 processors and Intel GPUs. The company is focusing on co-designing systems for performance, energy efficiency, and developer continuity while collaborating with communities like the Open Compute Project (OCP) to enable AI inference deployment across various environments.
The announcement comes as Intel continues to expand its presence in the AI accelerator market, with the Crescent Island GPU representing a key component in the company's broader AI infrastructure strategy. The product is specifically designed to meet the evolving needs of AI inference applications as the industry shifts toward more distributed and real-time AI processing requirements.