Innovative Multi-modal AI Interactions Unveiled by Soul App at 2024 GITEX GLOBAL

DUBAI, UAE, Oct. 17, 2024 /PRNewswire/ -- By learning a person's behaviours and preferences, a digital twin belonging exclusively to that person can be created, enabling interactions that blur the line between making friends and finding companionship. Scenarios once confined to science fiction films are now moving towards reality.

From October 14th to 18th, the leading Chinese social networking platform Soul App is exhibiting at GITEX GLOBAL with its latest self-developed multimodal large model, which offers capabilities such as 3D virtual human replication, human body motion reconstruction, reply recommendation, voice calls, and multimodal understanding.

Soul App's CTO Tao Ming stated, "At GITEX GLOBAL, we look forward to exchanging ideas with technology companies from China and abroad, presenting the company's latest applications in the social field and innovative solutions in digital entertainment, and jointly exploring new possibilities for social development."

Digital Twin: Innovative Interaction between Virtual and Reality

GITEX GLOBAL is now in its 44th edition. This year it has again grown in scale, spanning two large venues: the Dubai World Trade Centre and Dubai Harbour. With over 6,700 global technology giants and innovative start-ups participating, it showcases breakthrough technologies in the AI field, gathers the most cutting-edge technological trends, and points to new directions for industry transformation.

As one of the Chinese Internet platforms that pioneered bringing AI into social relationships, Soul is demonstrating its AI technologies and their latest applications in social scenarios. This is also Soul's first appearance at a major international exhibition.

Soul unveiled its beta-testing feature "Digital Twin" at GITEX GLOBAL.

The digital twin acts as an intelligent communication assistant that helps users make friends and develop deeper relationships. It ensures that each recommended reply fits the conversation context as well as the user's persona, drawing on the diction, tone and style they use in daily life.

The technology addresses the pain points users frequently encounter when socializing by offering icebreakers, chat topics and suitable response options.

In addition, Soul plans to introduce a new capability that lets the digital twin automatically interact with other users or their digital twins while the user is offline. By reviewing the conversations their digital twin has had with others, users can judge whether they would make good friends and then continue the relationship themselves.

In this process, users' images are key to shaping vivid, dynamic digital personas, so Soul encourages users to upload their photos to create digital replicas that capture their facial features.

At GITEX GLOBAL, on-site visitors can instantly generate 3D virtual humans and experience immersive multi-modal interaction through real-time motion capture and reconstruction.

Multi-modal End-to-End Large Model: A Super Anthropomorphic Emotional Companionship Experience

Since its launch in 2016, Soul has refrained from encouraging users to upload real-life photos, in order to reduce social pressure. In 2022, Soul combined AI, rendering, and image-processing technologies to launch its self-developed NAWA engine, giving users the technical means to create customized 3D social avatars and scenes. With this engine, users can independently create vivid avatars that showcase their distinct personalities.

Drawing on its accumulated technology and breakthroughs in the development of its self-developed large models, Soul has now comprehensively upgraded its 3D virtual human capabilities into a mature AI interaction solution: a multi-modal large model that integrates text, voice, and motion interactions, delivering a more human-like interactive experience and more efficient, natural, and richer information exchange.

Soul has been investing continuously in multi-modal large models.

After launching its intelligent recommendation engine "Lingxi" and using algorithms to help users discover and build lasting social relationships, Soul formally began R&D on AIGC technology in 2020, systematically advancing these technologies and bringing AI capabilities into a wide range of social scenarios.

Soul has since launched its self-developed large language model, Soul X. In June this year, it also released a self-developed end-to-end full-duplex voice call model featuring ultra-low interaction latency, fast automatic interruption handling, highly realistic voice expression, and emotion perception and understanding capabilities.

The model can directly interpret rich real-world audio, speak in a wide range of super anthropomorphic styles, and deliver dialogue that feels closer to everyday conversation, providing a "human-like" sense of emotional companionship. This marks a breakthrough for Soul in human-computer interaction. At the same time, the modal upgrade from text to voice to vision signals a fundamental change in how people interact with machines.

Going forward, through this latest AI interaction solution, which combines 3D virtual human capabilities with the multi-modal end-to-end large model, the 3D digital twins that Soul users create will serve as all-round multi-modal assistants in the digital world, supporting users as they discover, establish, and deepen relationships across rich social scenarios on the platform such as Audio Partyroom and Soul Square.

While helping users expand their social circles, it offers high-quality, engaging, and immersive human-computer interaction and brings genuine, natural emotional companionship.

SOURCE Soul App

For further information: Liz Ngan, yanlinlin@soulapp.cn