[Op-ed]
Can responsibility exist without obligations?
Responsibility is a difficult word to pin down, especially in connection with technology – which is complex and “lacking” the human values that could be blamed for incidents. Aristotle and the Kantian school of thought hold that responsibility is only possible with knowledge of moral implications and free will. The absence of free will would complicate attribution, as responsibility could be diluted by factors beyond one’s control.
Obligation, on the other hand, refers to a commitment bound by legal or moral expectations. Its source may vary from ethical principles to social contracts. Such systems can be seen in the technology governance landscape today, where principles for data management or corporate ethics serve as the moral compass for technology development. In the Kantian school of thought, satisfying obligations despite limitations is a high expression of moral responsibility and virtue.
Morality features in both assessments, which seek alignment between desire, duty and happiness. In philosophical discussions, Aristotle describes morals as the gold standard of virtues, whereby virtues should contribute to human wellbeing or happiness. Meanwhile, Kant’s moral philosophy is anchored in a universal moral law, one that imposes duties, is discerned through reason and is acted upon without instruction. Whether a universal morality exists in technology governance is debatable.
The OECD’s 38 members have identified six foundational values and six technology-specific values in technology governance, including safety and security, privacy, transparency and responsiveness. The UNESCO Recommendation on the Ethics of AI, adopted by all of UNESCO’s member states, includes obligations to the environment and the principle of “do no harm”.
Yet, it is worth asking whether international moral systems translate into responsibility or obligations. While UN human rights treaties and laws can be referred to for moral standards, adherence is more problematic, especially as countries pick and choose their priorities. Technology governance is riddled with philosophical assumptions that absolve one party or another of responsibility.
One assumption is that technology is neutral: merely a tool, carrying no values such as good, bad, right or wrong. The impact of technologies can also take a generation to manifest, as the data needed to establish cause, effect and responsibility are collected after the fact.
The social media platform Facebook, for instance, was launched in 2004, yet the attribution of responsibility arose only in the past decade. A leaked internal study by Meta, Facebook’s parent company, dated 2019 found that one of its apps, Instagram, worsened body-image issues for one in three teenage girls. In an effort to build a safer community, Instagram introduced features such as the option to hide “likes”. Meanwhile, the 2019 terror attack in Christchurch and a growing body of research on cyberbullying linked to social media and messaging services illustrated the bias in algorithms and their role in disseminating harm.
The makers of technology, which is developed in teams and consists of billions of lines of code, could dispute having sufficient knowledge of, or intention to cause, harm. Thus, in many cases, neither the technology nor its operators could be held liable for the intentions of actors generating content on platforms or sharing information through channels. However, Meta’s failure to stop the distribution of counterfeit ads on its platform in 2023 damaged the reputation of safe harbour, the legal provision that shields platforms from liability or penalties across various areas of law. This illustrates that, with sufficient research and data, causal responsibility can be attributed, which may close the gaps that allow responsibility to be dismissed.
Additionally, some believe that digital spaces are reflections of reality and that, in the quest to preserve free will, users should have the right to choose their digital experience. Free will is a key component of responsibility: accountability cannot be fully attributed if actions lie outside of human will. On this view, platforms would develop safer tools but rely on users to opt in, for fear of tampering with their realities. Users must then curate their digital realities to suit their own safety and privacy comfort levels, which could build resilience across a wider spectrum of activities.
However, this also means that users bear the burden of responsibility for keeping their digital experience safe and clean, which depends on awareness and vigilance. The United Kingdom’s Online Nation report found that UK smartphone users use an average of 41 apps a month. Should these apps be non-native, a user would have to navigate the settings of each one to activate the features that ensure a baseline of safety across experiences.
Oftentimes, internal efforts are principles-based, such as Microsoft’s Responsible AI principles or IBM’s Principles for Trust and Transparency. These aim to balance ethics and safety with the free market and innovation, fulfilling national development goals as well as the private sector’s commercial aims. Depending on the platform, the private sector may also introduce community guidelines and internal rules to be enforced on users. However, these approaches tend to be loose frameworks in which accountability can be ambiguous. This fluidity means abiding by soft law is framed as a “choice”, especially where domestic legislation does not state duties and expectations. An example is X’s recent AI offering, Grok: X’s community policy guidelines clearly state zero tolerance for child-safety violations, yet X provided only a pro forma response when pressed for explanations and assurances of reform.
In the current landscape, legal instruments are needed to clarify responsibility and obligations. Laws such as the European Union’s AI Act and the Online Safety Acts of the UK, Australia and Malaysia have arisen to specify expectations for platforms and technology operators. Malaysia is also aiming to table an AI Act, having tightened the Communications and Multimedia Act as well as the Personal Data Protection Act.
Conclusion
So, can responsibility exist without obligations? The short answer is yes. On a theoretical level, Aristotle treats responsibility as a matter of attribution, while obligation, in the common vernacular, is a course of action required by legislation or moral systems. The two can exist separately, though there are similarities in the processes leading to action. If obligations are tied to a Kantian sense of duty, both require steps that reinforce reasoning: steps that reflect on desires and universal virtues, whereby the truly responsible human would find that desire, duty, virtue and happiness are all aligned.
However, reality begs to differ. While shouldering responsibility can be an internally reflective process, in practice, rules and laws are needed to assign responsibilities. In the transboundary and interconnected technology stack, the clarity provided by legal structures is needed among a multitude of stakeholders. Furthermore, as capital gains are prioritised over safety measures, responsibilities can be redirected. Hence the current era, in which responsibility is upheld by legal obligations.
But here is the catch: while law is seen as the silver bullet to clarify ambiguities, legislation written without sufficient nuance may crystallise inaccurate governance mechanisms. This would affect innovation, which explains why nations, the EU aside, have been hesitant to regulate. In addition, gaps in the ownership of responsibility redirect harms as someone else’s problem, allowing social ills to fester even when there are parties with solutions. However, the technology governance space is filled with contradictory actors in constant competition, be it private sector against private sector or government against private sector. In an ideal reality, self-agency for responsibility is possible, with communication and ownership harnessed to build a safer cyberspace. Yet, in the current landscape, negotiation is as much a part of obligations, lest responsibility as a value be weakened.
Disclaimer: The views and opinions expressed in this op-ed are those of the author(s) and do not necessarily reflect the views of the Centre for Responsible Technology (CERT), the Institute of Strategic & International Studies (ISIS) Malaysia, or the Malaysian Communications and Multimedia Commission (MCMC).