Intel Maximizes Hardware Value with New oneAPI 2023 Tools
Intel unveiled the new Intel® oneAPI 2023 tools on December 20.
The tools are available through the Intel® Developer Cloud and regular distribution channels.
The new oneAPI 2023 tools support the 4th Gen Intel® Xeon® Scalable processors, the Intel® Xeon® CPU Max Series, and the Intel® Data Center GPUs, including the Flex Series and the new Max Series.
Intel® oneAPI 2023 delivers improved performance and productivity and adds support for new Codeplay plug-ins that make it easier for developers to write SYCL code for non-Intel GPU architectures.
With these standards-based tools, Intel gives developers the freedom to choose their hardware and makes it easier to build high-performance applications that run on multi-architecture systems.
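As a rough illustration of what such standards-based, multi-architecture code looks like, the following is a minimal SYCL 2020 vector-add sketch (not taken from Intel's materials); the same source can target whichever CPU or GPU the runtime selects.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // default_selector_v picks the "best" available device (GPU if present, else CPU).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // One work-item per element; the runtime maps this to the chosen device.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffers go out of scope here, copying results back to the host vectors

    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
    return 0;
}
```

With the oneAPI toolchain this kind of source is typically built with `icpx -fsycl`, but the exact command depends on the installed compiler and back-ends.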
Timothy Williams explained that early application performance has already improved on development systems using the Intel Max Series GPU accelerators. He added that leadership-class computational science depends on multi-vendor, multi-architecture programming, and that with standards-based code portability through SYCL and Python AI frameworks such as PyTorch accelerated by Intel libraries, he looks forward to achieving the first scientific discoveries on the Aurora system next year.
The new 2023 developer tools include the latest compilers, libraries, analysis and porting tools, and optimized artificial intelligence and machine learning frameworks for building high-performance, multi-architecture applications for CPUs, GPUs, and FPGAs, all powered by oneAPI.
With these tools, developers can reach their target performance faster, save time by maintaining a single code base, and spend more time on innovation.
The new oneAPI tools let developers take advantage of advanced features of Intel hardware, including:
• Intel® Advanced Matrix Extensions (Intel® AMX), Intel® QuickAssist Technology (Intel® QAT), Intel® AVX-512, bfloat16, and more on the latest Intel CPUs
• Intel Data Center GPUs, including the Flex Series with its hardware-based AV1 encoder and the Max Series GPUs with data-type flexibility and Intel® Xe Matrix Extensions (Intel® XMX)
Benchmark examples:
• MLPerf™ DeepCAM deep learning inference and training benchmarks on the Intel Xeon CPU Max Series achieved 2.4x higher performance than AMD-based products, and 3.6x higher performance with Intel® AMX enabled through the Intel oneAPI Deep Neural Network Library (oneDNN).
• LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) workloads running on Intel Xeon Max CPUs with computation offloaded to six Max Series GPUs, optimized with oneAPI tools, achieved up to 16x higher performance.
Advanced software performance:
• The Intel® Fortran Compiler supports Fortran language standards including Fortran 2018, and expands OpenMP GPU offload support to speed development of standards-compliant applications.
• The Intel® oneAPI Math Kernel Library (oneMKL) improves portability with expanded OpenMP offload functionality (a minimal offload sketch follows this list).
• The Intel® oneAPI Deep Neural Network Library (oneDNN) supports advanced deep learning features of the 4th Gen Intel Xeon Scalable and Intel Max Series CPU processors, including Intel AMX, Intel AVX-512, VNNI, and bfloat16.
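To make the OpenMP GPU offload mentioned above more concrete, here is a hedged, generic OpenMP 5.x sketch in C++ of a SAXPY-style loop offloaded with `#pragma omp target`; it is illustrative only, not code from Intel's release, and the build flags noted in the comments are only the commonly documented ones for the oneAPI compilers.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    float* px = x.data();
    float* py = y.data();
    const float a = 3.0f;

    // Offload the loop to the default accelerator (GPU if available).
    // The map clauses control host<->device data movement.
    // Typical build (assumption): icpx -fiopenmp -fopenmp-targets=spir64 saxpy.cpp
    #pragma omp target teams distribute parallel for \
        map(to: px[0:n]) map(tofrom: py[0:n])
    for (int i = 0; i < n; ++i)
        py[i] = a * px[i] + py[i];

    std::printf("y[0] = %f\n", y[0]);  // expected: 5.0
    return 0;
}
```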
Rich SYCL support and powerful code migration and analysis tools boost productivity by making it easier for developers to write code for multi-architecture systems.
• The Intel® oneAPI DPC++/C++ Compiler supports new Codeplay Software plug-ins for NVIDIA and AMD GPUs, simplifying SYCL code development and extending code portability across these processor architectures. This provides a unified build environment with integrated tools that improve productivity across platforms. Intel and Codeplay plan to offer product support first for the oneAPI plug-in for NVIDIA GPUs (see the device-enumeration sketch after this list).
• More than 100 CUDA APIs have been added to the Intel® DPC++ Compatibility Tool, which is based on the open source SYCLomatic project, making it easier to migrate CUDA code to SYCL.
• Intel® VTune™ Profiler lets users identify MPI imbalance issues.
• Intel® Advisor adds automated roofline analysis for the Intel Data Center GPU Max Series to identify and prioritize memory, cache, and compute bottlenecks, and provides actionable insights for optimizing data-transfer reuse when offloading from CPU to GPU.
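As a small illustration of how a developer might check that non-Intel GPU back-ends (for example the Codeplay NVIDIA or AMD plug-ins) are visible to the SYCL runtime, here is a hedged sketch that simply enumerates the available platforms and devices; it assumes any SYCL 2020 implementation, such as the oneAPI DPC++/C++ Compiler.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // List every platform/back-end the SYCL runtime can see.
    // With the Codeplay plug-ins installed, CUDA (NVIDIA) and HIP (AMD)
    // platforms would appear here alongside Intel Level Zero / OpenCL.
    for (const auto& platform : sycl::platform::get_platforms()) {
        std::cout << "Platform: "
                  << platform.get_info<sycl::info::platform::name>() << "\n";
        for (const auto& device : platform.get_devices()) {
            std::cout << "  Device: "
                      << device.get_info<sycl::info::device::name>() << "\n";
        }
    }
    return 0;
}
```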
With 48% of developers targeting heterogeneous systems that use two or more types of processors, more efficient multi-architecture programming is needed to address the growing scope and scale of real-world workloads.
With Intel's standards-based multi-architecture tools and oneAPI, an open, unified programming model, developers gain freedom of hardware choice along with performance, productivity, and code portability across CPUs and accelerators.
Code written for a proprietary programming model such as CUDA lacks portability to other hardware, creating a siloed development environment that locks organizations into a closed ecosystem.
A number of new Centers of Excellence have been established, and oneAPI adoption across the ecosystem continues to grow.
One of them is the Open Zettascale Lab at the University of Cambridge, which focuses on porting key exascale candidate codes to oneAPI, including CASTEP, FEniCS, and AREPO.
The center also offers workshops in which experts teach oneAPI methodologies and tools for compilation, porting, and performance optimization.
Currently, a total of 30 oneAPI Centers of Excellence have been established.