[Op-ed]
Human 5.0 requires guardrails for AI-mediated belonging
[ Date Published ]
2 March 2026
[ Focus area ]
Society

Malaysia has embraced the ambition of human 5.0 as part of its broader digital economy agenda. The concept of human 5.0 reframes digital transformation by placing people, rather than technology, at the centre of innovation and economic growth. It focuses on using advanced technologies in ways that support and enhance human capabilities, while ensuring ethics, trust, inclusion and societal wellbeing remain core priorities. This aligns with the National Artificial Intelligence Roadmap (AI-Rmap 2021-25), which aims to harness AI to boost economic growth while embedding principles like fairness, transparency and accountability into national digital transformation. Together, the human 5.0 and AI-Rmap visions imagine technology enhancing human wellbeing, strengthening social cohesion and supporting inclusive growth. In a country navigating rapid digitalisation alongside deep social diversity, the appeal is obvious. Yet beneath this optimism lies a quieter shift that deserves strategic attention. As AI increasingly mediates how Malaysians interact, trust one another and form a sense of belonging, the social fabric itself may be changing faster than our governance frameworks can keep up.
AI social agents are no longer futuristic concepts. They already operate within Malaysian digital life. Algorithmic moderators shape discourse in WhatsApp and Telegram groups. AI-driven tools influence workplace performance management and internal communications. Chatbots now provide mental health support, community guidance and even moral framing in online spaces. These systems are designed to encourage participation, reduce conflict and promote behaviours deemed healthy or constructive. Technically, they work well. Socially, their growing role positions them not just as tools but as architects of group norms.
This matters because belonging is not a peripheral human concern. Social identity influences how individuals interpret information, align values and relate to institutions. In Malaysia, social cohesion has always depended on negotiation across difference: ethnicity, religion, language and culture. This negotiated peace, often described as delicate but resilient, is sustained through ongoing dialogue, friction and compromise. Disagreement is not a failure of cohesion. It is how trust is built and recalibrated.
AI social agents alter this dynamic in subtle but consequential ways. Unlike human participants, these systems are optimised, consistent, emotionally calibrated and aligned to predefined objectives. Whether moderating discussions or nudging behaviour, they reward certain expressions and quietly discourage others. Over time, this can produce what may be called an algorithmic in-group: digital communities that feel harmonious and supportive, yet whose boundaries and norms are shaped more by opaque design choices than by conscious, collective deliberation. When the norms of Malaysian belonging are increasingly mediated by algorithms designed outside our social and cultural context, the ability to define social harmony risks being indirectly shaped by external priorities. For example, during the 15th general election and in its aftermath, social media became a key space for political communication, with algorithmic feeds helping certain narratives, partisan content and polarising messages reach wide audiences rapidly. Reports on the election highlighted widespread disinformation and negative narratives on platforms such as Facebook and Twitter, where algorithm-driven engagement amplified content aligned with users’ existing views, reinforcing echo chambers and shaping public sentiment in ways that complicated national discourse.
From a software development perspective, this is often framed as optimisation. Reduce toxicity. Increase engagement. Improve wellbeing indicators. These objectives are understandable and frequently well intentioned. However, optimisation logic struggles with pluralism and moral ambiguity unless explicitly designed to accommodate them. Most systems prioritise stability over productive tension and predictability over exploration. In a society as diverse as Malaysia’s, this creates a risk of false consensus: agreement produced not through understanding but through filtering.
This has direct implications for trust. Trust in social systems depends on reciprocity and agency. People trust one another because they recognise shared vulnerability and the possibility of disagreement or failure. AI agents simulate social presence without participating in this reciprocal moral space. When users are not fully aware of this asymmetry, trust becomes misaligned. Individuals may feel socially supported while responding primarily to optimisation logic rather than human intent. When these users later encounter the unfiltered, non-optimised real world, the resulting dissonance can deepen polarisation.
For Malaysia, this is not merely a social concern but a strategic one, where social cohesion functions as a core pillar of national stability and security. The management of race, religion and royalty, the so-called 3R issues, has always required careful human judgement and contextual sensitivity. If AI systems begin to mediate belonging by prioritising comfort over connection, they risk hardening digital silos along ethnic, linguistic or ideological lines. While these risks do not originate from AI alone, algorithmic systems can amplify existing social fractures by accelerating norm reinforcement at scale. The result is not overt division but quiet fragmentation, making the long-standing national aspiration of Bangsa Malaysia more difficult to realise.
Human 5.0 – in the Malaysian context – must therefore be understood as more than smart cities or digital productivity. It is about digital sovereignty and social resilience. A society that allows external datasets, opaque algorithms or commercially driven optimisation to shape social norms without oversight risks ceding a form of social governance by default. Against this backdrop, several strategic imperatives emerge for Malaysia’s AI governance architecture.
First, transparency must be treated as a right, not a courtesy. Malaysians should be able to identify clearly when they are interacting with AI social agents, understand the role these systems play and have visibility into the objectives guiding their behaviour. This includes moderation logic, behavioural nudges and emotional framing. Without such visibility, individuals cannot reasonably assess how their trust or sense of belonging is being shaped. The National Guidelines on AI Governance and Ethics (AIGE) already suggest that users should have the right to know when they are interacting with an AI. However, because these are guidelines, companies adopt them as best practice, a “courtesy” rather than a legal requirement. The upcoming AI Governance Bill (expected to be tabled by mid-2026) needs to move beyond simple AI labelling and require developers to provide explainability reports, allowing users to see the logic behind a decision and strengthening trust.
Second, AI-mediated social influence should be recognised as a strategic risk domain within national AI governance. Current governance frameworks, such as ONSA, focus on harmful content. However, systems designed to influence group norms, emotional expression or identity-related behaviour operate at a different level of impact. These are not just ethical risks; they are potential strategic threats to social cohesion. An expanded vision should explicitly include the social architecture effects of AI systems on identity formation and trust. This could be achieved through Social Impact Assessments (SIAs) embedded in the National AI Technology Action Plan (2026-30), complementing technical safety assessments. Further, a Social Cohesion Stress Test for AI agents could be developed. Before a mental health chatbot or an automated community moderator is deployed at scale, it should be tested for its ability to handle Malaysian-specific cultural distinctions.
Third, preserving the human-in-the-loop must be a deliberate policy choice. Not every social problem is a technical problem. Disagreement and discomfort are not indicators of system failure. In many cases, they are essential to democratic resilience and social learning. Hybrid governance models, where AI supports human judgement rather than replaces it, should be the default in sensitive social domains. Automation should assist deliberation, not silently override it.
Fourth, accountability for AI social agents must remain clearly human and institutional. When AI systems shape behaviour or reinforce norms, responsibility cannot be abstracted into design processes or technical complexity. Clear lines of accountability between developers, deploying organisations and regulators are essential. Embedded values must remain contestable, auditable and correctable.
Finally, Malaysia has an opportunity to lead at the regional level. Integrating psychological and sociological expertise into AI governance bodies and policy units, such as the National AI Office (NAIO) and Ministry of Digital, would help ensure that social outcomes are considered early in the design and deployment of AI systems. An understanding of how identity, trust and belonging operate at scale is not a peripheral concern when AI is deployed into social spaces. It is foundational to responsible governance.
Human 5.0 should not evolve into a future where belonging is frictionless but shallow or where trust is widespread yet misplaced. Technology can and should support human connection, but it must not quietly displace the human processes through which connection gains meaning. If AI social agents are allowed to engineer social cohesion without transparency, accountability and strategic oversight, Malaysia risks trading authentic agency for managed stability.
The policy question, then, is no longer whether AI social agents are effective. Their effectiveness is clear. The more consequential question is whether their influence aligns with the kind of Malaysia we seek to sustain – resilient, inclusive, and united across difference. If handled well, human 5.0 offers Malaysia the chance to set a regional benchmark for human-centric AI governance, demonstrating that social resilience and technological ambition need not be in tension. Embedding psychological and sociological expertise into AI governance structures would help ensure that systems shaping social interaction are designed with a deep understanding of human behaviour, group dynamics and cultural context, rather than driven solely by technical or economic optimisation.
Disclaimer: The views and opinions expressed in this op-ed are those of the author(s) and do not necessarily reflect the views of the Centre for Responsible Technology (CERT), the Institute of Strategic & International Studies (ISIS) Malaysia or the Malaysian Communications and Multimedia Commission (MCMC).