Second Generation

Intel® Xeon® Processor Scalable Family

Custom Configure Your
Intel® Xeon® Scalable Platform


All solutions are TAA compliant.

The Intel® Xeon® Scalable platforms are also available on our GSA IT Schedule 70, NASA SEWP V, and NITAAC CIO-CS contracts.

Integrated into Koi Computers' expertly engineered HPC technology, these processors deliver outstanding performance.

The Intel® Xeon® Processor Scalable Family provides the foundation for a powerful data center platform. Disruptive by design, this innovative processor sets a new level of platform convergence and capability across compute, storage, memory, network, and security. Organizations can now drive forward their most ambitious digital initiatives with a feature-rich, highly versatile platform.

Intel® Deep Learning Boost

  • Accelerates AI, deep learning, and vision workloads with up to 14X⁷ the inference throughput of previous-generation processors.
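For readers who want to verify that a system actually exposes this capability, Intel® DL Boost is implemented as the AVX-512 VNNI instruction set, which the Linux kernel reports as the `avx512_vnni` CPU flag. A minimal sketch (ours, not Intel's, and assuming a Linux `/proc/cpuinfo`) that checks for it:

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags, or an empty set if unavailable."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def supports_dl_boost():
    # AVX-512 VNNI is the instruction set behind Intel DL Boost.
    return "avx512_vnni" in cpu_flags()

print("Intel DL Boost (AVX-512 VNNI) available:", supports_dl_boost())
```

On non-Linux systems or older processors the check simply reports False.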

Intel® Optane™ DC persistent memory

  • Speed workloads and time to insight with this revolutionary new memory product offering affordable, persistent, large-capacity memory.

Integrated Intel® QuickAssist Technology (Intel® QAT)

  • Accelerates data compression and cryptography, freeing the host processor and enhancing data transport and protection across server, storage, network, and VM migration. Integrated into the chipset.

Intel® Resource Director Technology for Determinism

  • Extends Quality of Service (QoS) with memory bandwidth allocation.
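For context on how this looks in practice, Linux exposes Intel® RDT memory bandwidth allocation through the kernel's resctrl filesystem, where each control group has a `schemata` file accepting lines such as `MB:0=50;1=100` (a bandwidth percentage per memory domain). A small illustrative Python helper (the function name is our own) that builds such a line:

```python
def mba_schemata(percent_by_domain):
    """Build a resctrl Memory Bandwidth Allocation 'schemata' line,
    e.g. {0: 50, 1: 100} -> 'MB:0=50;1=100' (percent per memory domain)."""
    parts = ";".join(f"{dom}={pct}" for dom, pct in sorted(percent_by_domain.items()))
    return f"MB:{parts}"

# Applying it would require root and a mounted resctrl filesystem, e.g.:
#   with open("/sys/fs/resctrl/mygroup/schemata", "w") as f:
#       f.write(mba_schemata({0: 50, 1: 100}) + "\n")
print(mba_schemata({0: 50, 1: 100}))  # MB:0=50;1=100
```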

Enhanced Security

  • Hardware mitigations for side-channel exploits help protect systems and data by hardening the platform against malicious attacks.

Extended Availability of Support

  • 15-year product availability and 10-year use-case reliability help protect your investment.

Performance to Propel Insights

Intel's industry-leading, workload-optimized platform with built-in AI acceleration provides a seamless performance foundation for the data-centric era, from the multicloud to the intelligent edge and back. With 2nd Gen Intel® Xeon® Scalable processors, the Intel® Xeon® Scalable processor family enables a new level of consistent, pervasive, and breakthrough performance.

Overview of the 2nd Gen Intel® Xeon® Scalable Platform

Intel® Xeon® Platinum 9200 Processors

Designed for high performance computing, advanced artificial intelligence and analytics, the Intel® Xeon® Platinum 9200 processors deliver breakthrough levels of performance with the highest Intel® Architecture FLOPS per rack, along with the highest DDR native memory bandwidth support of any Intel® Xeon® processor platform.
  • Up to 56 Intel® Xeon® Scalable processing cores per processor
    • Two processors per 2U platform (Intel® Server System S9200WK Data Center Block)
  • 12 memory channels per processor, 24 memory channels per node
  • Features new Intel® Deep Learning Boost instruction for
    enhanced AI inference acceleration and performance
  • Enhanced multi-chip package optimized for density and performance

Intel® Xeon® Platinum 8200 Processors

Second-generation Intel® Xeon® Platinum 8200 processors are the foundation for secure, agile, hybrid-cloud data centers. With enhanced hardware-based security and exceptional two-, four-, and eight+-socket processing performance, these processors are built for mission-critical, real-time analytics, machine learning, artificial intelligence, and multi-cloud workloads. With trusted, hardware-enhanced data service delivery, this processor family delivers monumental leaps in I/O, memory, storage, and network technologies to harness actionable insights from our increasingly data-fueled world.

Intel® Xeon® Gold 6200 and Intel® Xeon® Gold 5200 Processors

With support for higher memory speeds, enhanced memory capacity, and four-socket scalability, the Intel® Xeon® Gold 6200 processors deliver significant improvements in performance, advanced reliability, and hardware-enhanced security. They are optimized for demanding mainstream data center, multi-cloud compute, and network and storage workloads. The Intel® Xeon® Gold 5200 processors deliver improved performance with affordable advanced reliability and hardware-enhanced security. With up to four-socket scalability, they are suitable for an expanded range of workloads.

Intel® Xeon® Silver 4200 Processors

Intel® Xeon® Silver processors deliver essential performance, improved memory speed, and power efficiency, with the hardware-enhanced performance required for entry-level data center compute, network, and storage.

Entry-Level Performance and Hardware-Enhanced Security

The Intel® Xeon® Bronze processors deliver entry-level performance for small business and basic storage servers. Their hardware-enhanced reliability, availability, and serviceability features are designed to meet the needs of these entry solutions.

Contact us to purchase your customized
Intel® Xeon® Scalable platform today.

FOOTNOTES
  • 5.   Up to 3.50X 5-year refresh performance improvement in VM density compared to the Intel® Xeon® processor E5-2600 v2 family: 1-node, 2x E5-2697 v2 on Canon Pass with 256 GB (16 slots / 16GB / 1600) total memory, ucode 0x42c on RHEL7.6, 3.10.0-957.el7.x86_64, 1x Intel 400GB SSD OS Drive, 2x P4500 4TB PCIe, 2x 82599 dual-port Ethernet, Virtualization Benchmark, VM kernel 4.19, HT on, Turbo on, score: VM density=21, tested by Intel on 1/15/2019; vs. 1-node, 2x 8280 on Wolf Pass with 768 GB (24 slots / 32GB / 2666) total memory, ucode 0x2000056 on RHEL7.6, 3.10.0-957.el7.x86_64, 1x Intel 400GB SSD OS Drive, 2x P4500 4TB PCIe, 2x 82599 dual-port Ethernet, Virtualization Benchmark, VM kernel 4.19, HT on, Turbo on, score: VM density=74, tested by Intel on 1/15/2019.

  • 6.   1.33X average performance improvement compared to Intel® Xeon® Gold 5100 processor: Geomean of est SPECrate2017_int_base, est SPECrate2017_fp_base, Stream Triad, Intel Distribution of Linpack, server-side Java. Gold 5218 vs. Gold 5118: 1-node, 2x Intel® Xeon® Gold 5218 CPU on Wolf Pass with 384 GB (12x 32GB 2933 (2666)) total memory, ucode 0x4000013 on RHEL7.6, 3.10.0-957.el7.x86_64, IC18u2, AVX2, HT on all (off for Stream, Linpack), Turbo on, result: est int throughput=162, est fp throughput=172, Stream Triad=185, Linpack=1088, server-side Java=98333, tested by Intel on 12/7/2018; 1-node, 2x Intel® Xeon® Gold 5118 CPU on Wolf Pass with 384 GB (12x 32GB 2666 (2400)) total memory, ucode 0x200004D on RHEL7.6, 3.10.0-957.el7.x86_64, IC18u2, AVX2, HT on all (off for Stream, Linpack), Turbo on, result: est int throughput=119, est fp throughput=134, Stream Triad=148.6, Linpack=822, server-side Java=67434, tested by Intel on 11/12/2018.

  • 7.   Up to 14X AI performance improvement with Intel® DL Boost compared to Intel® Xeon® Platinum 8180 Processor (July 2017). Tested by Intel as of 2/20/2019: 2-socket Intel® Xeon® Platinum 8280 Processor, 28 cores, HT on, Turbo on, total memory 384 GB (12 slots / 32GB / 2933 MHz), BIOS: SE5C620.86B.0D.01.0271.120720180605 (ucode: 0x200004d), Ubuntu 18.04.1 LTS, kernel 4.15.0-45-generic, SSD 1x sda INTEL SSDSC2BA80 SSD 745.2GB, nvme1n1 INTEL SSDPE2KX040T7 SSD 3.7TB, Deep Learning Framework: Intel® Optimization for Caffe version 1.1.3 (commit hash: 7010334f159da247db3fe3a9d96a3116ca06b09a), ICC version 18.0.1, MKL-DNN version v0.17 (commit hash: 830a10059a018cd2634d94195140cf2d8790a75a), model: https://github.com/intel/caffe/blob/master/models/intel_optimized_models/int8/resnet50_int8_full_conv.prototxt, BS=64, DummyData, 4 instances / 2 sockets, datatype: INT8; vs. tested by Intel as of July 11, 2017: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, Turbo disabled, scaling governor set to "performance" via intel_pstate driver, 384GB DDR4-2666 ECC RAM, CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64, SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with environment variables KMP_AFFINITY='granularity=fine, compact', OMP_NUM_THREADS=56, CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with "caffe time --forward_only" command, training measured with "caffe time" command. For "ConvNet" topologies, a dummy dataset was used; for other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with "numactl -l".