In-Depth Look at the AI Conference: Are Foundation Models Useless Without Applications? Will Autonomous AI Development Threaten Humanity?


Release time: 2024-07-19

July 4: 2024 World Artificial Intelligence Conference and High-Level Meeting on Global AI Governance Open

 

On July 4, the 2024 World Artificial Intelligence Conference (WAIC 2024) and the High-Level Meeting on Global AI Governance officially opened. Addressing global challenges in AI governance, the conference released the Shanghai Declaration on Global AI Governance, which emphasizes promoting AI development, ensuring AI safety, and establishing governance frameworks, and calls on all parties to collaborate in harnessing AI for the benefit of humanity.

 

Foundation Models Focus on Real-World Applications

 

Leading foundation model providers such as Tencent, Alibaba, Baidu, iFLYTEK, and SenseTime showcased a range of application-oriented solutions at the exhibition, showing that many foundation models have moved from theory to practice. Compared with past years, when companies competed primarily on parameter size, the focus in 2024 has shifted to practical deployment.

 

At the Industry Development Forum on July 4, Baidu’s founder, chairman, and CEO Robin Li urged the industry to prioritize applications over model parameter competition.

 

“I see many still fixated on foundation models—benchmarking, climbing leaderboards, or announcing they’ve surpassed GPT-4 or matched new iterations like GPT-4o. Today there’s an ‘earth-shattering release,’ and tomorrow an ‘epic update.’ But I ask: where are the applications? Who is benefiting?”

 

Li emphasized that intelligent agents are the most promising direction for AI applications, with search serving as their primary distribution channel. For example, during the recent college entrance exam season in China, some companies used large models to generate exam essays, which Li deemed of little practical value. Instead, he argued that students needed intelligent tools to assist with post-exam tasks, such as choosing schools and majors.

 

However, Li warned against falling into the “super app trap,” asserting that highly capable applications are more valuable than apps focused solely on driving daily active users (DAU).

 

Similarly, SenseTime’s chairman and CEO Xu Li pointed out that AI has not yet reached a “super moment” because it has not significantly transformed vertical industries. Xu described current large models as “memory systems” and emphasized that breakthroughs in specific fields require constructing higher-level reasoning chains.

 

Highlighting the need for tailored solutions, Hu Shiwei, co-founder and president of Fourth Paradigm, stressed the importance of serving each industry’s specific needs thoroughly.

 

“As industry models increasingly cover various sectors, they will converge to create a powerful ecosystem. This is our unique path to AGI.”

 

Chinese companies have already demonstrated advantages in specific AI application scenarios, reflecting their strengths in both technology and deployment.

 

Balancing AI Development and Governance

 

Generative AI, exemplified by foundation models, is driving a new wave of global technological innovation, and industry leaders believe that every sector will eventually be reshaped by large models. However, this rapid technological progress poses governance challenges, especially when regulation lags behind.

 

The conference placed a strong emphasis on addressing these governance challenges, calling for collective action to ensure AI benefits all of humanity.

 

At the opening ceremony, Xue Lan, dean of Schwarzman College and director of the Institute for AI International Governance at Tsinghua University, outlined the risks posed by AI, including technical “hallucinations,” data security issues, and algorithmic bias. Looking ahead, he warned that autonomous AI development could potentially threaten human society, a risk that should not be overlooked.

 

Xue highlighted China’s comprehensive governance framework, which includes rules for algorithmic and data governance at the technical level and tailored regulations for specific scenarios, forming a multidimensional and multi-layered system.

 

Zhou Bowen, director of the Shanghai AI Laboratory and a Tsinghua University professor, noted that the rapid advancement of generative AI has introduced a range of risks.

 

“The public’s concerns start with data breaches, misuse, and privacy or copyright violations. Next are risks tied to malicious use, such as generating fake information. Ethical issues, including bias and discrimination, as well as potential challenges to employment structures and societal systems, are also worrying. Sci-fi scenarios where AI spirals out of control and humans lose autonomy further fuel these fears.”

 

Zhou argued that safety advancements in AI models have lagged far behind performance gains, an imbalance he attributed to disproportionate investment in the two areas.

 

“Some risks are already apparent, while others remain potential threats. Mitigating these risks requires collective effort, thoughtful design, and significant contributions to ensure AI’s balanced and sustainable development.”
