[Op-ed]
Sunlight is the best disinfectant
In Franz Kafka’s The Trial, the protagonist, Josef K., is arrested without warning and dragged into a labyrinthine legal process governed by faceless officials and obscure rules. Despite his best efforts, he never learns the charge against him. Yet the trial grinds forward, inexorably, towards his execution.
Authority and power in Kafka’s world are exercised through opacity. Decisions are made and consequences imposed, yet explanations are never offered. Viewed critically, the deepest harm lies not only in the injustice Josef suffers, but in his inability to question or contest power in the first place. Denied knowledge and information, he is denied agency.
This logic anticipates our technological moment. While allegory cannot substitute for evidence in policymaking, it nonetheless has value. Kafka’s work, in particular, offers a lens through which to reflect on these warnings – and on why we need to reconsider our relationship with the power brokers of technology.
Consider how much of our daily life is now influenced by algorithms we neither see nor fully understand. Take social media as an example. It remains unclear how its opaque recommender algorithms work – what content they prioritise, deprioritise, amplify, and remove. Even for experienced content creators, success is guided less by understanding than by guesswork.
If we accept the analogy of social media platforms as the twenty-first century’s digital public square, should it not follow that the public deserves to know whether they have an equal voice within it? Viewed this way, it is clear that algorithms affect not just content consumption, but the exercise of substantive free speech itself.
The same logic applies to a whole host of other services that rely on algorithmic decision-making. This includes what we get shown on e-commerce marketplaces, how credit scores are calculated, and what prices are dynamically set for everything from e-hailing rides and flight tickets to health insurance – each raising its own degree of concern.
But beyond algorithms, this lack of transparency characterises other aspects of technology today. Take data centres, for example. It is widely known that they are resource intensive – requiring an enormous amount of energy to run their servers, and a vast quantity of water to cool them. Yet even basic figures on exactly how much energy and water are consumed, and on the scale of their emissions, can be hard to come by.
Relatedly, data centre operators’ push towards adopting renewables has been equally opaque. It is a common public relations strategy to claim that data centres use renewable energy and recycled water as part of their sustainability efforts. Yet without concrete details, it remains unclear what proportion of their operations these measures actually account for, and whether the claims are to be believed.
The scarcity of publicly available information also constrains the scope and quality of independent research. In practice, researchers are often left with little choice but to partner directly with companies in order to study their operations and impacts.1 This dependency inevitably shapes the types of research that can be undertaken, favouring – consciously or otherwise – lines of inquiry that are more palatable to corporate interests, by virtue of requiring company consent.
As a result, researchers are often unable to independently and rigorously examine the myriad impacts, trade-offs, and harms of these technologies. This, in turn, limits the degree of confidence with which causal links can be established. Such constraints undermine governance frameworks premised on evidence-based policymaking, as the very evidence required to inform decisions remains locked behind corporate walls.
And even when companies do disclose information, these disclosures are partial and selective. Perhaps the closest thing to a transparency regime today lies in the quarterly transparency reports published by major social media platforms.
Yet the information disclosed in these reports is neither sufficient to meaningfully scrutinise the effectiveness of the platforms’ safety efforts, nor does it include any assessment of the impact of platform design and features on externalities such as the erosion of trust, the degradation of public discourse, and negative psychological and mental health outcomes.
Disconcertingly, this playbook may be further entrenched by developments in the AI industry. Without meaningful disclosure on the data used to train these models, it becomes difficult to assess their risks, the adequacy of safeguards taken to mitigate them, as well as their broader societal and environmental impacts.
The opacity is compounded by two factors. First, deep learning systems are inherently prone to functioning as black boxes by virtue of how they are trained. Second, the industry’s hypercompetitive trajectory treats secrecy as strategic, in sharp contrast to the open-source, collaborative ethos that characterised much of its earlier years. Further complicating matters is the extent to which AI development is being subsumed within Washington’s geostrategic competition with China, with transparency surrounding American models often among the first casualties sacrificed at the altar of national security.
With Kafka in mind, it is exactly these information asymmetries that must be addressed if power structures are to be contested rather than endured. Louis Brandeis, the former US Supreme Court justice, famously observed that sunlight is the best disinfectant – a remark on the need for transparency in governance that is as relevant today as it was then.
Viewed in this light, the need for meaningful transparency becomes self-evident. This represents a necessary paradigm shift, rebalancing power asymmetries away from companies that are currently able to determine, unilaterally, what information to disclose and on what terms. Meaningful transparency, in this sense, is defined by companies disclosing information that enables effective scrutiny in service of the public interest.
It therefore follows that the parameters for such reporting – including scope, frequency, and format – should be set by institutions that are both democratic in nature and accountable to the public. Crucially, transparency regimes should not mean indiscriminate data dumps. Disclosures must be proportionate in size, intelligible in quality, and accessible in format to meet the established public objectives.
To conclude, it is worth noting that greater transparency alone will not solve all the woes associated with technology. Nonetheless, it is a necessary precondition for understanding its exact impact on our society, forming the basis of evidence-based policymaking moving forward.
For those who cynically ask what companies stand to gain from all this, the answer is the ability to show that their systems are not merely branded as trust and safety, but are in fact genuinely trusted and demonstrably safe.
1 There is also the option of undertaking independent primary research, but this is often resource intensive and costly.
Disclaimer: The views and opinions expressed in this op-ed are those of the author(s) and do not necessarily reflect the views of the Centre for Responsible Technology (CERT), the Institute of Strategic & International Studies (ISIS) Malaysia, or the Malaysian Communications and Multimedia Commission (MCMC).