Nvidia’s bringing its AI avatars to games, and they can interact with players in real time. With voiced dialogue. And facial animations.
Nvidia has just announced ACE for Games, a version of its Omniverse Avatar Cloud Engine designed to create, power, and give voice to game NPCs in real time.
CEO Jensen Huang explained that ACE for Games integrates text-to-speech, natural language understanding (or, in Huang’s words, “basically a large language model”), and facial animation, all under the ACE umbrella.
Essentially, an AI-created NPC will listen to a player’s input, such as a question put to the NPC, then generate an in-character response, speak that dialogue out loud, and animate the NPC’s face as it does so.
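Nvidia hasn’t published the ACE for Games API, but the loop described above maps onto a familiar speech-in, speech-out pipeline (in Nvidia’s stack, Riva handles the speech side and Audio2Face the facial animation). Here’s a minimal sketch of one conversational turn in Python. To be clear: every function name here is a hypothetical stand-in for illustration, not a real ACE call.

```python
# Hypothetical sketch of the NPC loop Huang describes: speech in,
# LLM-generated reply out, audio and facial animation driven from it.
# None of these function names come from Nvidia's actual SDK.

from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    persona: str  # in-character backstory fed to the language model

def transcribe_speech(audio: bytes) -> str:
    """Speech-to-text stand-in: turn the player's mic audio into text."""
    return "What's good on the menu tonight?"  # canned output for the sketch

def generate_reply(npc: NPC, player_line: str) -> str:
    """Language-model stand-in: produce an in-character response."""
    prompt = f"{npc.persona}\nPlayer: {player_line}\n{npc.name}:"
    # A real implementation would send `prompt` to a large language model.
    return "Fresh ramen, best in the city. But I've got bigger worries tonight."

def synthesize_speech(text: str) -> bytes:
    """Text-to-speech stand-in: turn the reply into voiced audio."""
    return text.encode()  # placeholder; a real TTS returns an audio buffer

def animate_face(npc: NPC, audio: bytes) -> None:
    """Facial-animation stand-in: drive lip-sync from the speech audio."""
    print(f"[{npc.name}'s face animates to {len(audio)} bytes of speech]")

def npc_turn(npc: NPC, player_audio: bytes) -> None:
    """One conversational turn: listen, respond, speak, animate."""
    question = transcribe_speech(player_audio)
    reply = generate_reply(npc, question)
    audio = synthesize_speech(reply)
    print(f"{npc.name}: {reply}")
    animate_face(npc, audio)

owner = NPC(name="Ramen Shop Owner",
            persona="You run a ramen shop in a cyberpunk city.")
npc_turn(owner, b"<player microphone audio>")
```

The catch, as all the real-time framing implies, is that every one of those stages has to finish fast enough for the exchange to feel like an actual conversation.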
Huang also showed off the technology in a real-time demo crafted in Unreal Engine 5. It’s set in a cyberpunk world, because of course it is (sorry, Katie), and shows a player walking into a ramen shop and talking to the owner. The owner has no scripted dialogue but responds to the player’s questions in real time and sends them off on a makeshift mission.
It’s pretty impressive, and undoubtedly a look into how games may utilise this technology in the future. As Huang said, “AI will be a very big part of the future of videogames.”
Of course, he would say that. Nvidia is the company best positioned to gain from the sudden surge in AI demand, thanks to sales of its AI accelerators. And we have seen some basic integrations of ChatGPT into games already, like when Chris added it to his Skyrim companion and it failed to solve a simple puzzle. But this new ACE platform does appear a lot more polished and properly real-time.
What we don’t know is what it took to run the ACE for Games demo. It could demand more than your average GeForce GPU right now, or rely on a cloud-based component. Huang was a bit light on the details, but I’m sure we’ll hear more about this tool as games actually make moves to use it. So far, there’s no word on any that will, but I’d be keen to see this in action outside of a demo.