
On Monday, at Computex 2023, NVIDIA CEO Jensen Huang announced progress on a number of products and services related to artificial intelligence (AI) and accelerated computing. Among all the news, one of the most notable items was that the GH200 Grace Hopper Superchip is now in full production. These chips are the core components of NVIDIA's new DGX GH200 AI supercomputing platform and MGX systems, which are designed to handle massive generative AI workloads. NVIDIA also released its new Spectrum-X Ethernet networking platform, which is optimized for AI servers and supercomputing clusters.

Main themes at Computex 2023

| DGX GH200 – AI Supercomputer with Large Memory

According to Jensen Huang, the DGX GH200 AI supercomputer integrates NVIDIA's most advanced accelerated computing and networking technologies, and is aimed at the development of large next-generation models for generative AI language applications, recommender systems and data analytics workloads. It is expected to be available by the end of this year.

DGX GH200

Huang said that the DGX GH200 is the first supercomputer to pair the GH200 Grace Hopper Superchip with the NVIDIA NVLink Switch System. The new interconnect links 256 Grace Hopper Superchips so that they can work together like a single giant GPU, delivering 1 EFLOPS of performance and 144 TB of shared memory, nearly 500 times more memory than the previous-generation DGX A100 320GB system launched in 2020.
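As a rough sanity check on that comparison (assuming decimal units), the ratio of the two memory capacities works out as follows:

144 TB / 320 GB = 144,000 GB / 320 GB = 450

which is consistent with the "nearly 500 times" figure quoted above.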

DGX GH200

NVIDIA will provide the DGX GH200 reference blueprint to its major customers Google, Meta and Microsoft, and will also offer the system as a reference architecture for cloud service providers and hyperscale data centers. NVIDIA itself will deploy a new NVIDIA Helios supercomputer, consisting of four DGX GH200 systems, for its own R&D work. The four systems contain a total of 1,024 Grace Hopper Superchips and are connected by NVIDIA's Quantum-2 InfiniBand 400 Gb/s networking.

| NVIDIA ACE – Avatar Cloud Engine for Games

At Computex 2023, NVIDIA launched NVIDIA ACE (Avatar Cloud Engine) for Games. This is a custom AI model foundry service that middleware, tool and game developers can use to build and deploy custom speech, dialogue and animation AI models. It gives non-player characters (NPCs) intelligent, evolving conversational skills, allowing them to answer player questions with lifelike personalities.

During the conference, NVIDIA released Kairos, a mind-blowing real-time demo showcasing how generative AI, Unreal Engine 5, and NVIDIA RTX and DLSS technologies can be combined to create lifelike non-playable characters (NPCs). The demo was built by NVIDIA and its partner Convai to give the world a glimpse of the kind of sparks the collision of games and AI may produce.

Screenshot from the Kairos demo

Tools used to create the demo include the following (a rough sketch of how these pieces fit together appears after the list):

NVIDIA NeMo for building and customizing the language models that drive the game characters;

NVIDIA Riva for automatic speech recognition (ASR) and text-to-speech (TTS) capabilities that enable live speech conversation;

NVIDIA Omniverse Audio2Face for generating expressive facial animation for the game characters from just an audio source.
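To make the division of labor concrete, here is a minimal conceptual sketch, in Python, of how such an NPC conversation pipeline could be wired together. The NPCPipeline class and the transcribe, generate_reply, synthesize_speech and animate_face methods are hypothetical placeholders standing in for the Riva, NeMo and Audio2Face stages described above; they are not actual NVIDIA APIs.

# Conceptual sketch of an ACE-style NPC dialogue loop.
# Every class and method below is a hypothetical placeholder, not a real
# NeMo / Riva / Audio2Face API; in an actual integration each stage would
# call the corresponding NVIDIA service.

from dataclasses import dataclass


@dataclass
class NPCPipeline:
    persona: str  # personality/backstory prompt used to condition the dialogue model

    def transcribe(self, player_audio: bytes) -> str:
        # ASR stage (Riva in the demo): player speech -> text.
        raise NotImplementedError("call the speech-recognition service here")

    def generate_reply(self, player_text: str) -> str:
        # Dialogue stage (NeMo in the demo): persona-conditioned language model reply.
        raise NotImplementedError("call the customized language model here")

    def synthesize_speech(self, reply_text: str) -> bytes:
        # TTS stage (Riva in the demo): reply text -> NPC voice audio.
        raise NotImplementedError("call the text-to-speech service here")

    def animate_face(self, reply_audio: bytes) -> dict:
        # Facial-animation stage (Audio2Face in the demo): audio -> animation curves.
        raise NotImplementedError("call the audio-to-animation service here")

    def respond(self, player_audio: bytes) -> tuple[bytes, dict]:
        # One full conversational turn: player speech in, NPC speech and facial animation out.
        text_in = self.transcribe(player_audio)
        reply = self.generate_reply(text_in)
        audio_out = self.synthesize_speech(reply)
        animation = self.animate_face(audio_out)
        return audio_out, animation

Splitting the loop into independent stages mirrors the tool list above: speech recognition and synthesis, the dialogue model, and the facial animation can each be customized or swapped without touching the rest of the pipeline.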

For more on how the demo was created, you may click here to visit the technical blog on the NVIDIA Developer site.

| NVIDIA MGX Server Specifications for System Makers

To meet the needs of data centers of all sizes, Jensen Huang released the NVIDIA MGX server specification, which provides a modular reference architecture for system manufacturers. System builders can use it to quickly and cost-effectively build more than 100 server configurations for a wide range of AI, HPC and NVIDIA Omniverse applications. MGX supports NVIDIA's full range of GPUs, CPUs, DPUs and network adapters, as well as various x86 and Arm processors. Its modular design enables system builders to more efficiently meet each customer's unique budget, power delivery, thermal design, and mechanical requirements.

NVIDIA MGX

ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT, Supermicro and others will use MGX to build next-generation accelerated computers, cutting development costs by up to three-quarters and shortening development time by two-thirds, to as little as six months.

| More Partnerships and a New Platform

NVIDIA & SoftBank

According to the conference, NVIDIA is partnering with Japanese telecommunications giant SoftBank to build a distributed network of data centers in Japan that will deliver 5G services and generative AI applications on a common cloud platform. The data centers will use Grace Hopper Superchips, BlueField-3 DPUs and Spectrum Ethernet switches in modular MGX systems to provide the high-precision timing required by 5G protocols. The platform will reduce costs by increasing spectral efficiency while lowering energy consumption. These systems will help explore applications in areas such as autonomous driving, AI factories, AR/VR, computer vision and digital twins; future uses could include 3D videoconferencing and holographic communications.

In addition, WPP, the UK-based group that is the world's largest marketing services organization, is working with NVIDIA to build the first generative AI content engine on Omniverse Cloud, enabling creative teams to produce high-quality commercial content faster, more efficiently and at greater scale, while staying fully consistent with each client's brand. The breakthrough engine, based on NVIDIA AI and Omniverse, connects creative 3D and AI tools from leading software makers to revolutionize brand content and experiences at scale.

NVIDIA Omniverse and WPP - Generative AI

Mark Read, CEO of WPP, said that generative AI is changing the marketing world at an astonishing pace. WPP's partnership with NVIDIA gives it a unique competitive advantage not available to others in the market today. This new technology will transform the way brands create content for commercial use, and it reinforces WPP's position as an industry leader in the creative application of AI for the world's top brands.

NVIDIA also announced Spectrum-X, a networking platform designed to improve the performance and efficiency of Ethernet-based AI clouds. Built on networking innovations, it tightly couples NVIDIA Spectrum-4 Ethernet switches with NVIDIA BlueField-3 DPUs, achieving 1.7 times the overall AI performance and energy efficiency of traditional Ethernet fabrics, and enhancing multi-tenant capabilities through performance isolation to deliver consistent, predictable performance in multi-tenant environments.

The world's leading cloud computing providers are adopting the Spectrum-X platform to scale out generative AI services. Spectrum-X, Spectrum-4 switches, BlueField-3 DPUs and more are now available from system manufacturers such as Dell, Lenovo, AMD and others.



