One Server - Two GPUs
On-Demand GPU Servers with Dual Intel® MAX 1100
Dramatically accelerate AI/ML and HPC workloads with minimal investment.
- 2 x Intel MAX 1100 GPUs
- 56 Xe cores + 48 GB HBM2E memory per unit
- GPU-to-GPU Intel Xe Link Bridge
- No licensing fees for GPU features
- Provisioning via API, CLI, WebUI, or IaC

Unlock Breakthrough Performance for Data-Intensive Computing
To get the most out of emerging technologies such as AI and ML, you need an IT infrastructure that can handle inferencing, model enhancement, and high-performance computing (HPC) with ease. phoenixNAP’s Bare Metal Cloud lets you pair state-of-the-art hardware with a low-latency, high-throughput network.
Provision dedicated servers featuring next-gen GPU technologies in minutes using the platform’s API, CLI, or web portal. Choose pre-configured servers powered by dual Intel® Max 1100 GPUs and 4th Gen Intel Xeon® Scalable CPUs to cost-effectively accelerate end-to-end AI and analytics pipelines, with no licensing fees and no inconsistencies between the CPU and GPU implementations.
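For illustration, here is a minimal Python sketch of what a single provisioning call can look like. The endpoint path, request fields, and the location and OS identifiers are assumptions about the Bare Metal Cloud API’s shape, so check the current API reference for the exact schema.

```python
# Minimal sketch: create a Bare Metal Cloud GPU instance via the BMC REST API.
# The endpoint path, field names, and the "PHX"/"ubuntu/jammy" values are
# assumptions about the API's shape -- verify against the current API reference.
import requests

API_URL = "https://api.phoenixnap.com/bmc/v1/servers"   # assumed endpoint
TOKEN = "<oauth2-access-token>"                          # obtained separately

payload = {
    "hostname": "gpu-node-01",
    "type": "d3.g2.c1.xlarge",      # dual Intel MAX 1100 instance from the table below
    "location": "PHX",              # assumed location code for Phoenix, AZ
    "os": "ubuntu/jammy",           # assumed OS identifier
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
server = resp.json()
print(server.get("id"), server.get("status"))
```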
Benefits
The Future of AI Computing. Just a Few Clicks Away.
Locations
Phoenix, AZ
Ashburn, VA
More Coming Soon...
We Value Your Trust
We pride ourselves on being the preferred choice for our clients.
Instances
Meet Your Workload-Optimized GPU Servers
Bare Metal Cloud helps you easily build AI infrastructure clusters running on dedicated resources. Take your pick of the server instances powered by Intel Max 1100 GPUs. Spin up your pre-configured server with a few clicks or a single API call and tap into the raw performance of bare metal with cloud-like flexibility.
Instance Type | CPU | Specs | Hourly | Monthly | 12 mo | 24 mo | 36 mo
---|---|---|---|---|---|---|---
d3.g2.c1.xlarge | 4th Gen Intel Xeon Scalable (SGX enabled) | 2x Intel MAX 1100 GPU | | | | |
d3.g2.c2.xlarge | 4th Gen Intel Xeon Scalable (SGX enabled) | 2x Intel MAX 1100 GPU | | | | |
d3.g2.c3.xlarge | 4th Gen Intel Xeon Scalable (SGX enabled) | 2x Intel MAX 1100 GPU | | | | |

All instances include 15 TB of free bandwidth and on-demand access to terabytes of scale-out Network Storage or S3-compatible Object Storage.
Features
Leverage Intel Max-Powered GPU Servers as a Service
Running your data-intensive computing models in a highly parallel environment has never been this simple or affordable. Access Intel’s future-proof technologies through API-driven bare metal servers optimized for high-performance computing. Optimize your IT spend, enhance existing AI models, and speed up inferencing or real-time analytics without code refactoring or infrastructure management complexities.
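As a rough illustration of how little code change the GPU path requires, the sketch below assumes a server image with Intel’s GPU driver stack and the intel-extension-for-pytorch package installed; it targets the “xpu” device exposed by Intel’s PyTorch extension and falls back to CPU if the GPUs are not visible.

```python
# Minimal sketch: run PyTorch inference on an Intel MAX GPU via the "xpu" device.
# Assumes intel-extension-for-pytorch and Intel's GPU driver stack are installed;
# this is generic Intel tooling, not a phoenixNAP-specific API.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 10),
).eval()

device = "xpu" if torch.xpu.is_available() else "cpu"  # fall back to CPU if no GPU
model = model.to(device)
model = ipex.optimize(model)  # apply Intel-specific kernel optimizations

with torch.no_grad():
    x = torch.randn(32, 1024, device=device)
    logits = model(x)
print(logits.shape, logits.device)
```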
Integration
Intel Max 1100 GPU in Bare Metal Cloud
Through automation, Bare Metal Cloud lets you reap all the benefits of dedicated compute, storage, and network resources without the hassle of manual infrastructure provisioning. With streamlined access to dedicated servers with Intel’s highest-density GPUs and AI accelerator-packed CPUs, you can scale your resources and optimize the performance of your GPU-intensive workloads at will.
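As a sketch of how that automation can be scripted end to end, the snippet below polls a newly created server until it leaves its provisioning state; the GET path, the “status” field, and the “creating” value are assumptions about the API’s response shape rather than documented guarantees.

```python
# Minimal sketch: poll a newly provisioned server until it reports ready.
# The GET path and the "status"/"creating" values are assumptions about the
# BMC API's response shape; verify against the current API reference.
import time
import requests

API_URL = "https://api.phoenixnap.com/bmc/v1/servers"
TOKEN = "<oauth2-access-token>"
SERVER_ID = "<id-returned-at-creation>"

def wait_until_ready(server_id: str, poll_seconds: int = 30) -> dict:
    """Block until the server leaves its provisioning state."""
    while True:
        resp = requests.get(
            f"{API_URL}/{server_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        server = resp.json()
        if server.get("status") != "creating":   # assumed provisioning status value
            return server
        time.sleep(poll_seconds)

server = wait_until_ready(SERVER_ID)
print(server.get("status"), server.get("publicIpAddresses"))  # field names assumed
```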
AI and HPC Acceleration Powered by Bare Metal Cloud

Intel Max 1100 GPUs | 4th Gen Intel Xeon Scalable CPUs | phoenixNAP’s Bare Metal Cloud
---|---|---
1 server – 2 GPUs: Dual Intel Max 1100 GPU cards connected via Intel’s high-speed Xe Link Bridges. | More Cores + Increased Multi-Socket Bandwidth: Up to 1.53x gen-on-gen performance gain. | Global and edge deployment in a matter of minutes.
108 MB L2 cache: Delivers up to 2x performance gain over competition for HPC and AI workloads. | Intel AMX: Accelerates DL inference and AI training workloads. | Dedicated servers equipped with high-performance DDR5 RAM and NVMe storage.
56 GPU Cores: Allows for parallel processing of complex models with 2x 56 Xe cores. | Intel Data Streaming Accelerator (DSA): Boosts the performance of storage, networking, and data-intensive workloads. | Direct access to hardware without vendor lock-in or hypervisor overhead.
48 GB of HBM2E Memory: Supports large and complex DL models and datasets without memory constraints. | Intel Dynamic Load Balancer (DLB): Dynamically distributes network data across CPU cores. | API-driven provisioning and management via IaC tools (Terraform, Ansible, Pulumi, Chef).
Intel XMX: Performs up to 256x Int8 operations per clock for faster AI training and inference. | Intel QuickAssist Technology (QAT): Accelerates encryption, decryption, and data compression. | Easy deployment of terabytes of low-cost network storage or S3-compatible object storage.
Ray Tracing Acceleration: Delivers faster scientific visualization and animation with 56 ray tracing units. | Intel In-Memory Analytics Accelerator (IAA): Improves analytics performance and accelerates database query throughput. | 50 Gbps network speed. 20 Gbps free DDoS protection. Public and private network options.
Shared Code Investments: Optimizes CPU and GPU performance without duplicating your development efforts. | Intel Software Guard Extensions (SGX): Isolates sensitive data in a secure enclave with hardware-based memory protection. | Flexible billing and bandwidth models. Pay-as-you-go options with discounts for reservations.
Use Cases
The Perfect Infrastructure for Data-Intensive Applications
FAQ
Frequently Asked Questions
What is the Intel Data Center GPU Max 1100?
The Intel Data Center GPU Max 1100 is part of Intel’s latest and fastest GPU series. With over 100 billion transistors, 56 Xe cores, built-in ray tracing acceleration, and Intel’s Xe Matrix Extensions (XMX), it is designed to significantly boost AI inference and HPC workloads.
How quickly can I deploy a server?
Depending on the operating system and the server instance you choose during deployment, a pre-configured Bare Metal Cloud server is ready in anywhere from 60 seconds to 15 minutes.
Which workloads are these instances optimized for?
These server instances are workload-optimized to accelerate AI/ML inference, enhance already trained models, speed up high-performance computing (HPC) and data or database analytics, and streamline e-commerce or AdTech workloads.
Where are the GPU instances available?
While Bare Metal Cloud instances with Intel MAX 1100 GPUs are currently available only in Phoenix (AZ) and Ashburn (VA), the platform offers over 50 different server instances deployable across the U.S., Europe, and Asia-Pacific.
What storage options are available?
All Bare Metal Cloud servers feature enterprise-grade local NVMe drives. Additionally, Bare Metal Cloud lets you easily and near-instantly scale your storage capacity with terabytes of VAST Network File Storage or S3-compatible object storage.
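Because the object storage is S3-compatible, standard S3 tooling works against it; the sketch below uses boto3 with a placeholder endpoint and bucket, both of which are illustrative rather than actual phoenixNAP values.

```python
# Minimal sketch: write to S3-compatible object storage with boto3.
# The endpoint URL, bucket name, and credentials below are placeholders,
# not actual phoenixNAP values; any S3-compatible endpoint works the same way.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-object-storage-endpoint>",  # placeholder endpoint
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

# Upload a local file (e.g., a training checkpoint) to the bucket.
s3.upload_file("model-checkpoint.pt", "training-artifacts", "checkpoints/model-checkpoint.pt")
print(s3.list_objects_v2(Bucket="training-artifacts").get("KeyCount"))
```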
How does billing work?
Bare Metal Cloud servers with Intel MAX 1100 GPUs can be consumed on an hourly pay-as-you-go basis or through a 1-, 12-, 24-, or 36-month reservation. There are no additional licensing fees for any Intel MAX GPU features you use, and discounts are available for monthly and yearly reservations. Billing and usage information is transparent and accessible via the platform’s self-service portal.
Need more info?
Let's get in touch!
Contact us today and revolutionize the way you process data-intensive workloads! Our Sales team will get back to you within two business days to help you quickly find the right solution for your use case.
Call Us
Questions about our product or pricing? Call for support.
Chat With Us
Our Sales team is at your disposal throughout your infrastructure upgrade.
Email Us
Send us an email and find out more about the product or pricing.