
Best verified results on Llama 2 70B LoRA using 1x and 8x NVIDIA GB200 NVL72 systems, production-deployed to power APAC AI training at YTL.
TAIPEI, Nov. 17, 2025 /PRNewswire/ -- Wiwynn announced that it achieved the best verified results in the MLPerf® Training v5.1 Llama 2 70B LoRA benchmark (Closed division), leading both the 1x and 8x NVIDIA GB200 NVL72 configurations. The submissions were executed on production systems already deployed by YTL AI Cloud, spanning a 1-rack NVIDIA GB200 NVL72 (with 72 NVIDIA Blackwell GPUs) and an 8-rack NVIDIA GB200 NVL72 integrating 576 GPUs—demonstrating leadership from single-rack to multi-rack scale.
The verified MLPerf scores highlight Wiwynn's strengths in system design, manufacturing, liquid cooling, multi-rack integration, and hardware/software co-optimization, combined with YTL's excellence in AI infrastructure integration and operations. Together, the partners demonstrate how close collaboration between system manufacturers and data center operators can deliver production-grade, benchmark-verified AI training performance.
"Wiwynn designs for workload optimization and real-world deployment," said William Lin, President and CEO at Wiwynn. "Our collaboration with YTL spans from L11 system integration to L12, extending into infrastructure and software integration. Our solution built on NVIDIA GB200 NVL72 infrastructure at YTL shows how system engineering and software tuning unlock the full potential of large-scale GPU clusters."
"At YTL AI Cloud, we are building the region's most advanced AI infrastructure to serve as a premium hub for AI training and inferencing across APAC," said Philip Lin, CEO, YTL AI Cloud. "Our collaboration with Wiwynn demonstrates how the right facility design, infrastructure readiness, and system partnership can deliver world-class AI capability at production scale."
"Congratulations to Wiwynn on their strong achievements in MLPerf® Training v5.1. We appreciate their active, transparent participation and knowledge sharing, which strengthens the ecosystem and supports our mission of open collaboration to improve AI systems' accuracy, safety, speed, and efficiency," said David Kanter, Founder and Head of MLPerf, MLCommons.
YTL AI Cloud's large-scale clusters are purpose-built with the most advanced GPUs, liquid-cooled high-density racks, redundant power architecture, and low-latency interconnects. As a strategic AI hub for the Asia-Pacific region, YTL AI Cloud in Johor, Malaysia enables global and regional customers to deploy, train, and scale frontier AI models such as Llama 2 70B efficiently and sustainably.
By combining Wiwynn's cutting-edge system integration with YTL's robust AI data center foundation, the collaboration establishes a new standard for high-performance, scalable, and sustainable AI training infrastructure in the region.
| Footnote: [1] MLPerf® Training v5.1 Closed Llama 2 70B LoRA; systems: 1x NVIDIA GB200 NVL72 (72 GPUs) and 8x NVIDIA GB200 NVL72 (576 GPUs). Official results verified by MLCommons Association. Retrieved from the MLCommons results site on Nov. 12, 2025. The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information. |
About Wiwynn
Wiwynn is an innovative cloud IT infrastructure provider of high-quality computing and storage products, plus rack solutions for leading data centers. We are committed to the vision of "unleash the power of digitalization; ignite the innovation of sustainability". The company aggressively invests in next-generation technologies to provide the best TCO (Total Cost of Ownership), workload, and energy-optimized IT solutions from cloud to edge.
For more information, please visit the Wiwynn website, Facebook, and LinkedIn.
SOURCE Wiwynn