
Cyberspace: The New Battleground for Competing Norms


A proposal by Russia that the United Nations should consider a global cybercrime treaty has been adopted with the support of 30 African countries, raising concerns that Moscow’s known preference for state cyber sovereignty will prevail in ways that give countries regulatory freedom to stifle political opposition or citizen dissent.


Early in July 2021, cyber attacks originating from Russia prompted US President Joe Biden to call for action from Moscow. This, Biden said, was conveyed to Russian President Vladimir Putin during an hour-long phone call. While the Kremlin denies the US even contacted Moscow about the attacks, recent events have sparked debate around the responsibility of state actors, including Russia, in cyberspace.

Russia’s attempts to promote or resist norms in traditional areas of global governance are well documented. It is known for a more conservative approach to issues such as human rights and military intervention. And now it is under scrutiny in newer areas of contestation, including cyber governance and cyber security.

Over the past five years, Russia has become an active promoter of cyber governance norms. As it continues to push its cyber proposals on the international stage, where does Africa stand? Do growing relations between Africa and Russia mean they always share the same stance?

‘Splinternet’ or global infrastructure?

Moscow’s cyber norm promotion is closely linked to its national interests. Russia seeks to reclaim its stature as a global power (including in the technology landscape), but is also interested in how cyberspace can be harnessed for domestic purposes.

In deciding whether the internet should remain a global infrastructure or become a “splinternet” (controlled nationally), Russia and China are proponents of cyber sovereignty. They argue that countries should manage their own cyberspace and that the internet should be bordered and thus restricted.

This has led to a range of concerns around internet freedoms, from the censorship of political content online to large-scale internet shutdowns (a practice that has gained traction in some parts of Africa, Asia and the Middle East, especially around elections or public protests). While the US and other democracies have traditionally opposed cyber sovereignty, the scope it offers to confront cyber threats, conduct surveillance and regulate harmful content such as child pornography and terrorist propaganda means the idea is gaining ground in the Western world too.

In promoting this cyber norm, Russia seeks to pull as many countries as possible into its orbit to enhance its soft power capabilities. At the UN in 2018, a Russian-proposed working group, open to all UN member states, garnered the support of 109 countries. Many of these countries were African, demonstrating international interest in discussing cyber norms in terms favourable to Russia.

Of the working group’s initiatives, capacity-building efforts to enhance countries’ abilities to protect their ICT environment may particularly appeal to African states who perceive themselves as lagging. Indeed, the rise in cybercrime — with critical national services often affected — has seen cyber security become an issue of international concern.

African support for Russian cybercrime resolutions

Russia is a major supporter (and sponsor) of several international cybercrime resolutions at the UN. In December 2018, a Russia-backed resolution that required the UN Secretary-General to collect countries’ views about cybercrime was adopted by a majority vote. Of the 88 countries that voted in favour, 32 were African. Only four African countries — Botswana, Ghana, Morocco and South Africa — submitted their views, but all four listed lack of state capacity and lack of international consensus as major challenges in combating cybercrime. These and other views were summarised into a report for consideration by the General Assembly.

Moving the ball forward once more, in December 2019, Russia succeeded in pushing through a UN General Assembly resolution that aimed to create a negotiating platform, under UN auspices, for the consideration of a new cybercrime treaty. The move was strongly opposed by the US, which expressed concern that the resolution would stifle existing global anti-cybercrime efforts. But with 79 votes in favour, including 30 from Africa, the resolution was adopted. Officers were elected to the ad hoc committee in May 2021 and it has been agreed that six negotiating sessions will take place before the possible adoption of a treaty.

One of the major concerns with Russia’s resolution is its vagueness around the definition of cybercrime. Not only could this lead to legal uncertainty among countries, but it could also provide Russia with the regulatory room it needs to stifle political opposition or citizen dissent. A month before Russia’s UN resolution was passed, amendments to domestic legislation allowing the government to block internet traffic from outside Russia came into force. Human Rights Watch said the laws undermined freedom of expression and privacy.

How do Africa’s own cybercrime initiatives compare with Russia’s international efforts?

“A global governance system will be important,” Tomiwa Ilori, researcher at the University of Pretoria’s Expression, Information and Digital Rights Unit, told SAIIA. But African countries need to be wary of external influence, he said. “When deciding on a framework, a human rights-based approach should be used.”

An African Union Convention on Cyber Security and Personal Data Protection was adopted in 2014, but has yet to meet the minimum number of ratifications required for it to come into force. The convention references the need for regulatory frameworks to respect the rights of citizens, but it does not establish a framework for all member states. Instead, it encourages signatories to draft their own legal, policy and regulatory measures to manage cybercrime.

Almost 40 African countries have introduced legislation that deals with cybercrime. Some of the laws, like Russia’s UN resolution, are vaguely worded while others are similar to the European Union’s General Data Protection Regulation — an earlier attempt to establish uniform cyberspace policies across countries.

This tells us that as a continent of 54 states, African views on cyber governance are not homogenous. And while many share a preference for cyber sovereignty, particularly as a means to quash political dissent, African countries do have some level of agency when it comes to adopting a model. With cyberspace fast becoming the new battleground for competing norms and influence, there is also a role for civil society in Africa to continue advocating for cyber freedoms.

This article was first published by the South African Institute of International Affairs (SAIIA).


Cayley Clifford is a staffer at the independent public policy think tank, The South African Institute of International Affairs (SAIIA).


Hunger in the Heart of Empire: Pellagra in the United States

Institutionalized poverty led to an outbreak of pellagra that Americans would rather forget. But grain farmers remember; they know that hunger is good business.


“We are still surprised by the prevalence of . . . food shortages . . . 3,500 years after the Pharaohs worked out how to store grain.” The Dictator’s Handbook

The United States might be the last place you’d expect to hear about malnutrition that killed hundreds of thousands of people in the last century. Most Americans have already forgotten it, but a disease called pellagra—a niacin deficiency that causes dementia, diarrhea, and victims’ skin to roughen, crack, and eventually peel off—ran rampant in the United States for forty years.

Like most hunger, America’s pellagra nightmare didn’t just happen. It was allowed to fester for four decades because hunger suited the men in power. It disappeared once mass hunger became an inconvenience for America’s elite: when the US needed millions of soldiers in top physical shape for World War II. The United States’ long experiment with pellagra holds lessons for how its own internal hunger politics still work today; how it extends those hunger politics outward into “famine relief” efforts; and how food reform efforts in today’s America are still trapped in politics of denial about our own past.

America’s pellagra outbreak was a departure from earlier ones in Spain and Italy. In Europe, pellagra came from peasants trying to use the newly introduced crop of maize the same way they used wheat: by grinding dry grain into flour and using it to make breads and porridges. By contrast, European newcomers in what became the United States learned how to process maize from indigenous communities like the Chickahominy because for the first decades they were economically dependent on these communities. They ground it wet after a long soak in water and hardwood ashes. This process is today called nixtamalization, from nixtamalli, the word for “ash dough” in Nahuatl, the language of the Aztecs. It makes the kernels swell, shed their tough coats, and become soft and easy to grind into a dough by hand. The process also renders the niacin in maize digestible for the human body. Wherever maize becomes a staple crop without the soaking process, pellagra often follows.

That is why America’s pellagra outbreak was unusual. Americans knew what nixtamalization was. Even today, traditional American dishes like hominy and succotash are often made with nixtamalized corn.

So how did America’s pellagra outbreak happen?

The easy answer is technology. In 1901, a device called Beall’s corn degerminator was invented. The “germ” is the tiny plant embryo inside the seed. It contains most of the seed’s perishable oils. With the “germ” removed, grain can be pre-ground in large central facilities, shipped long distances and stored for long periods without going rancid. Unfortunately, the germ is also where most of the niacin in maize is found. Even if nixtamalized, this degerminated and pre-ground grain would still provide very little niacin to the diet.

So that’s the easy version of America’s pellagra story. Industrial technology and railroads created the long-distance trade of pre-prepared foods. While convenient, these foods were not nutritious. Even though Americans have mostly forgotten the pellagra outbreak, the “technology” interpretation of its cause still survives in America’s food reform movements today: Pre-prepared foods cause disease. The cure is to eat fresh foods made from scratch. Using pre-made and “convenience” foods is still seen in the US as a sign of poverty, laziness, and indifference: inviting sickness through your own lack of diligence.


But that is not the whole story. Niacin is not found only in corn; it also comes from poultry, fish, beef, beans, and nuts. The problem wasn’t that people were eating pre-ground corn. It was that they weren’t eating much else. The diet of poor Americans in the 19th and early 20th centuries was almost exclusively pre-ground corn, salt pork, and molasses: three foods that are all low in niacin. The problem wasn’t pre-ground corn; it was poverty. And not even just poverty, but a specific type of institutionalized poverty in which wealthier Americans bought the food of the poor for them and spent the least money possible on rations.

Pellagra was most widespread in the southeastern US. Even after the abolition of slavery, this part of the country specialized in farming cotton, not food. Cotton was grown under a sharecropping regime in which farmworkers lived on the estate and paid the owner for housing, tools, seed, and even food. They had to take out “loans” and pay them back at cotton harvest. The many sharecroppers who were Black were targeted for even worse. Estate owners used their wealth to build a system Americans called Jim Crow: no schools that would teach the children of poor Black families to read and write. Estate owners went on frequent “night riding” campaigns, shooting up Black homes and setting them on fire simply to terrorize their inhabitants. Jim Crow laws kept Black people from voting to stop these raids. Thus while America was “promoting freedom” abroad, it was itself torn by ethnic persecution and a labour system often indistinguishable from slavery.

Jim Crow also explains one of the most bizarre moments in US history: why American estate owners kept growing more and more cotton even as its global prices plummeted. It didn’t matter if estates lost money selling cotton. They made it up by loan-sharking their workers—using the loan system to soak up every additional penny workers made doing odd jobs like tinkering, domestic work, and making clothes. Their estates were less a cotton production system and more a system for mining the other inhabitants of their region for everything they were worth. What mattered was filling up the land with cotton. That way, there was simply nowhere for anyone to grow food. In this economically stunted region with few stores, poor people had to go through estate owners to buy food. And once they did, they were trapped in debt.

Cotton wasn’t about selling an agricultural commodity. It was about keeping whole regions poor and under the personal control of local landlords. Given how often they conducted raids and lynchings, one could even call America’s cotton estate men warlords.

But even that isn’t the whole story. Where did the corn come from?

It came from further north in the United States: a broad, fertile zone between the Ohio River and the 100th parallel. This region is known variously as the Midwest, the Corn Belt, and America’s breadbasket. The Midwest got started early as an export centre, sending corn and salt pork down the Mississippi to feed the enslaved. Their captors bought rations mostly as a supplement to the food grown on estates to minimize operating costs. But after the end of slavery, these estates switched to the Jim Crow model: excluding food crops from the region. Without the formal tools of slavery, the wealthy white landed elite found the next best way to control people was hunger.

This is how “US agribusiness” got started. It wasn’t because of mechanization after World War II. It was long before that, with mass exports to supply America’s own slave regime. A long-distance food trade already existed. That is why the Beall’s corn degerminator was invented in the first place. This isn’t an instance of technology popping out of nowhere to ruin lives. It was created to help along an extractive regime that was already happening. As long as we’re busy bickering over whether technology is good or bad, we’re not focused on who is using it and what goals of theirs it promotes. And if I had to guess, that’s exactly how powerful people like it. They like when we think the problem is machines existing, rather than the people putting them to work.


This longstanding trade wasn’t just good for the southern aristocracy. Midwestern landowners got fabulously wealthy because their fellow Americans struggled with forced scarcity. Just one state, South Carolina, imported US$70-100 million worth of food per year at the peak of this period in 1917—the equivalent of US$1.4-2 billion today. 1900 to 1920 became known as the “Golden Age of Midwestern Agriculture”. These two decades made a huge impression on American pop culture. When Americans today say “farming used to be profitable,” they’re referring to this specific period. Farming was famously precarious both before and after this time. Midwestern grain farmers have spent the last century chasing this high. And on some level, they know exactly how it happened: a war-torn Europe and a South plunged into artificial scarcity. Both unable to feed themselves and forced to either shell out their scarce cash or starve.

America might have forgotten the specifics of what happened. Pellagra is embarrassing, and World War I is a calamity few wish to remember. But if you look at US foreign policy, it’s clear that its grain farmers still remember enough. They know hunger is good business.

Thanks to the failings of basic democratic institutions in the US, Midwestern grain estates have incredibly disproportionate influence in US politics. This has consequences for our foreign policy, visible in “food aid” programmes that mostly amount to crop dumping, which serves three purposes. It alleviates food gluts at home, propping up crop prices in the United States. It undercuts farmers elsewhere in the world, which can start a vicious cycle of dependency on imports: the American grain farmer’s ultimate gold mine. And finally, it makes America’s farmers look important. It makes their wealth and political prestige look like it is earned through the hard work of farming, instead of what it is: thieved away from other farmers all around the world through back-room geopolitical dealings.

Cash crops and technology aren’t bad in and of themselves. In democratic environments, they can build wealth and well-being in farming areas. But in economies dominated by warlords and other malignant hustlers, everything is turned to the detriment of ordinary people. Cash crops, technology, even access to food and water become struggles used to keep people bound to power players. The United States is no exception. Our history of mass hunger at home, forgotten though it may be, is witness to that.


The Return of the Taliban: What Now for the Women of Afghanistan?

The American experiment in Afghanistan failed, but why should women and girls pay the price?


There have been a lot of knee-jerk reactions – particularly from liberals – about the United States’ hasty withdrawal from Afghanistan. Those who oppose US military intervention in foreign lands say the withdrawal couldn’t have come sooner – that invading Afghanistan in 2001 after the 9/11 terror attacks on New York and Washington was a mistake and staying on in (“occupying”) the country was an even bigger mistake. They argue that US military intervention in Korea, Vietnam, Somalia and other places has been disastrous, and that these interventions reek of imperialism.

Well and good. But everyone who has something to say about the poorly planned US withdrawal from Afghanistan, including the Taliban and President Joe Biden, has failed to answer these questions: What would the women of Afghanistan have wanted? Why were they not consulted before the US president made the unilateral decision to pull out troops from Afghanistan? And what gives Biden and the all-male Pashtun-dominated Taliban leadership the right to make decisions on women’s behalf?

I was in Kabul in 2002, some three months after the US invaded the country and ousted the Taliban from the capital city. I spoke with many women there who told me that they were relieved that the Taliban had left because life under the misogynistic movement had become unbearable for women and girls. Girls were not allowed to have an education so girls’ schools had to be run secretly from homes. The Taliban were known for barbaric public executions and for flogging women who did not wear burqas or who were accused of adultery. Theirs was an austere, cruel rule where people were not even allowed to sing, dance, play music or watch movies.

More than two decades of war, beginning with the Soviet invasion of Afghanistan in 1979 and the subsequent US-backed insurgency of the Mujahideen (mujahidun in Arabic, “those engaged in jihad”) in the 1980s, which later transformed into the Taliban movement, not to mention the US invasion of Afghanistan after the 9/11 terror attacks, had left Kabul’s physical infrastructure in ruins. Entire neighbourhoods had been reduced to rubble and no one quite remembered any more whose army had destroyed which building. The only buildings still left standing were the mosques and the Soviet-built apartment blocks housing civil servants. In 2002, Kabul Municipality had estimated that almost 40 per cent of the houses in the city had been destroyed in the previous fifteen years. Solid waste disposal barely met minimum standards, and running water and electricity were luxuries in most homes.

After the Taliban fled the capital and went underground, an estimated 3 million girls went back to school. At that time, the average Afghan child could expect only about 4 years of schooling. By 2019, this figure had risen to 10 years. Today, more than 13 per cent of adult women in Afghanistan have a secondary school education or higher. Women’s participation in the political sphere also increased dramatically; in 2019, nearly a third (27.2 per cent) of parliamentary seats were held by women.

No wonder women around the world were shocked and dismayed to see how easily Afghan women and girls were sacrificed and abandoned by the world’s leading powers. “My heart breaks for the women of Afghanistan. The world has failed them. History will write this,” tweeted the Iranian journalist and activist Masih Alinejad on 13 August 2021.

As Taliban fighters were gaining control of the capital Kabul on Sunday, 15 August 2021, an unnamed woman living in the city wrote the following in the Guardian:

As a woman, I feel I am the victim of this political war that men started. I felt like I can no longer laugh out loud, I can no longer listen to my favourite songs, I can no longer meet my friends in our favourite café, I can no longer wear my favourite yellow dress or pink lipstick. And I can no longer go to my job or finish the university degree that I worked for years to achieve.

There have been reports of Taliban fighters abducting and marrying young girls, and ordering women not to report to work. Afghan female journalists fear for their lives; many have gone into hiding. The sale of burqas has apparently skyrocketed.

The argument that women in other countries also suffer at the hands of men and experience gender-based violence does not fly with many Afghan women who have been fighting for women’s rights for the last two decades. For one, there is no law in any country in the world, as far as I know, that denies women an education or bans them from working outside the home. Women in these countries may not yet be truly free, but at least they can rely on the law to protect them. All the gains Afghan women have made over the last two decades will now be lost. I do not for one second believe that the rebranded Taliban emerging in Afghanistan have become feminists overnight, despite their pro-women rhetoric at press conferences. Mahbouba Seraj, an Afghan women’s rights leader, told TRT World that what is happening in Afghanistan is “going to put the country two hundred years back.” “I am going to say to the whole world—shame on you!” she stated.

A series of failures 

This is not the first time the US has abandoned Afghanistan. After Soviet forces withdrew from Afghanistan in 1989, the US pulled out as well, leaving the Mujahideen, which it had been funding, to its own devices. Yet in 1979, when Soviet forces entered Afghanistan, the US National Security Advisor Zbigniew Brzezinski had described the Mujahideen as “soldiers of God”, and told them, “Your cause is right and God is on your side.” The Mujahideen transformed into the Taliban, which imposed its severe rule on Afghans during the latter part of the 1990s. The country also became a den for terrorist organisations like Al Qaeda. The US essentially created a monster that launched the 9/11 attacks 22 years later.

Afghanistan has had a long and turbulent history of conquests by foreign rulers, and has often been described as the “graveyard of empires”. But it has not always been anti-women. In 1919, King Amanullah Khan introduced a new constitution and pro-women reforms. The last monarch, Zahir Shah (1933-1973), also ensured that women’s rights were respected through various laws. But the monarchy was overthrown in 1973, the republic that replaced it was toppled in a communist coup in 1978, and the Soviet Union then invaded and installed a puppet leader. This gave rise to the anti-Soviet Mujahideen, who gained control of the country in the 1990s and eroded many of the rights women had been granted.


There are many parallels with Somalia, which also enjoyed Soviet support under President Siad Barre. When the Soviets switched sides and began supporting Ethiopia’s Mengistu Haile Mariam, the US gained more influence, but it could not install democracy in a country that had descended into warlordism after Barre was ousted in 1991. After American soldiers were killed in Mogadishu during the country’s civil war in 1993, the US withdrew from Somalia completely. Conservative forces supported by some Arab countries filled the void. When a coalition of Islamic groups took over the capital in 2006, they were quickly ousted by US-backed Ethiopian forces. Al Shabaab was born. As in Afghanistan, the US had a hand in creating a murderous group that had little respect for women.

After the US invasion in 2001, instead of focusing on stabilising and rebuilding Afghanistan, President George Bush set his eyes on invading Iraq on the false pretext that the Iraqi dictator Saddam Hussein had links to Al Qaeda and was harbouring weapons of mass destruction. That war in 2003 cost the US government its reputation in many parts of the Muslim world, and turned the world’s attention away from Afghanistan. Bush will also be remembered for illegally renditioning and detaining Afghans and other nationals suspected of being terrorists at the US naval base in Guantanamo Bay.  This ill-advised move, which will forever remain a blot on his legacy, has been used as a radicalisation propaganda tool by groups such as the Islamic State in Syria (ISIS).

The international community is now sitting back and doing nothing, even as it is becoming increasingly evident that the world is witnessing a humanitarian catastrophe that will have severe political repercussions within the region and globally. The international community of nations, including the UN Security Council, cannot do anything except plead with the Taliban to not discontinue essential services, which is a tall order given that three-quarters of Afghanistan’s budget was funded by foreign (mostly Western) aid. The Taliban was allowed to take over the country without a fight. And all the UN Secretary-General could do was issue statements urging neighbouring countries to keep their borders open to the thousands of Afghans fleeing the country.

The mass exodus of Afghans, as witnessed at Kabul’s international airport, is a public relations disaster for the Taliban. It shows that not all Afghans welcome the Taliban’s return. As the poet Warsan Shire wrote about her homeland Somalia, “no one leaves home unless/home is the mouth of a shark”. Afghanistan has once again become a failed state.

The longest war 

The impact of the Taliban’s capture of the country is already being felt. The exodus of Afghans is creating a refugee crisis like the one witnessed in 2015 during the civil war in Syria. The US and its NATO allies have, in effect, created this refugee crisis themselves. It will likely generate anti-immigration and anti-Muslim sentiments in the US and Europe, and embolden racist right-wing groups. It is also possible that Afghanistan will become the site of a new type of Cold War, with Russia and China forming cynical alliances with the Taliban in order to destabilise the West and to exploit Afghanistan’s vast natural resources, which remain largely untapped. Girls’ education will be curtailed. No amount of reminding the Taliban that Prophet Mohammed’s wife Khadija was a successful businesswoman, and that his third wife Aisha played a major role in the Prophet’s political life, will change their minds about women. Women and girls are looking at a bleak future as the Taliban impose punitive restrictions on them that even the expansionist Muslim Ottoman Empire did not dare enforce in its heyday. Afghanistan will become a medieval society where women remain voiceless and invisible.

The worst-case scenario – one that is just too horrific to contemplate – is that terrorist groups like the Islamic State in Iraq and Syria (ISIS) and Al Qaeda will find a foothold in Afghanistan, and unleash a global terror campaign from there, as did Osama bin Laden more than two decades ago.


The irony is that, having invaded the country two decades earlier ostensibly to get rid of Islamic terrorists, the US under Biden has essentially handed the country over to the very group that had harboured terrorists like Osama bin Laden, the alleged mastermind of the 9/11 attacks. “President Joe Biden will go down in history, fairly or unfairly, as the president who presided over a humiliating final act in the American experiment in Afghanistan,” wrote David E. Sanger in the New York Times. (To be fair, it was not Biden who first opened the doors to the Taliban; President Donald Trump invited the Taliban to negotiations in Doha in 2018, which lent some legitimacy to a group that had previously been labelled as a terrorist organisation.)

Dubbed “America’s longest war”, the US military mission in Afghanistan has cost US taxpayers about US$2 trillion, a quarter of which has gone towards reconstruction and development, though critics have pointed out that the bulk of this money was used to train the Afghan military and police rather than for development projects. The military mission has also come at a huge human cost: 3,500 soldiers and other personnel from 31 NATO troop-contributing countries and 4,400 international contractors, humanitarian workers and journalists were killed in Afghanistan between 2001 and 2020. Thousands of Afghan lives have also been lost. The United Nations Assistance Mission in Afghanistan estimates that at least 100,000 Afghans have been killed or wounded since 2009.

Was the US and NATO intervention in Afghanistan worth it? Should the US and NATO have stayed a bit longer, until the country had well-functioning and well-resourced institutions and until they were sure that the Taliban had been completely rooted out? I think so, because I believe that ousting the Taliban was as ethically correct as eliminating ISIS and defeating the German Nazis. The problem in Afghanistan is that the Taliban were never defeated; they simply went underground.


There is no doubt that the “liberation” or “occupation” of Afghanistan by the US-dominated NATO mission in Afghanistan brought about some tangible benefits, including rebuilt and new infrastructure,  the growth of a vibrant civil society and more opportunities for women. But the US’s support of Western-backed Afghan governments that are generally viewed as corrupt by the majority of Afghans may have handed the Taliban the legitimacy and support they seem to be enjoying among the country’s largely poor rural population, just as installing highly corrupt Western-backed governments in Somalia in the last fifteen years gave Al Shabaab more ammunition to carry out its violent campaign. The Taliban is also recognised by some neighbouring countries, notably Pakistan, which is believed to be one of its funders, and which receives considerable military and other support from the US. This raises questions about why the US is aiding a country that is working against its interests in another. This Taliban-Pakistan alliance will no doubt be watched closely by Pakistan’s rival India.

Afghanistan, unfortunately, is a sad reminder of why no amount of investment in infrastructure and other “development” projects can fix something that has been fundamentally broken in a country. Like Iraq after the 2003 US-led invasion, it may fragment along tribal or sectarian lines and revert to a civil war situation. Under the Taliban “government”, Afghanistan may become a joyless place where people are not allowed to listen to music, dance or watch movies – where enforcement of a distorted interpretation of Islam casts a dark shadow on the rest of the Muslim world. And Afghan women and girls will once again pay the heaviest price.


The Kenyan Court of Appeal’s BBI Judgment: Thirsting for Sunlight

At its heart, the BBI Judgment is about power, and the judges in the majority believe that the constitution acts as a barrier against the concentration of power, and as a channel for its dispersal.


There is a story about how, for the longest time, the poetic perfection of The Iliad confounded scholars. How could Homer both be the first of the epic bards, and the most accomplished? Foundational works are tentative, exploratory, sometimes stumbling, searching for an assurance that they are doomed to never realise. That privilege is reserved for later works, which build upon the foundation and reach the pinnacle.

The mystery was ultimately resolved when it was deduced that Homer was not the first – or even (in all probability) one – person, but part of an entire oral tradition of epic composition (a lesson, perhaps, that whether artist, judge, or lawyer, acts of creation are always collaborative). Yet the point remains: when we consider work that has taken on the burden of a beginning, we should hold it to the standards of a beginning. Not every question will be answered, not every resolution will satisfy, not every path be taken to its logical destination. But without a beginning, there will be nothing to take forward.

I’d like to think of the BBI Judgment in the words of Christopher Okigbo’s poem, Siren Limits: “For he was a shrub among the poplars/ Needing more roots/ More sap to grow to sunlight/ Thirsting for sunlight. . . .” In the years to come, constitutional jurisprudence may put down stronger roots, and more sap may flow that takes it to sunlight, but here is where the beginning is.

In that spirit, in the first section of this article, I raise a couple of questions that future courts may be called upon to answer. These are in addition to some of the issues discussed in the previous posts, which have also been left open by the judgment(s) (constitutional statutes, referendum questions, identifying the exact elements of the basic structure, etc.)

Making the constitution too rigid?

A stand-out feature of both the High Court and the Court of Appeal judgments has been that, for the first time in basic structure history, the doctrine has been held not to constitute a bar on amendments, but to require the replication of the Constitution’s founding conditions. This, it is argued, provides a safeguard against a possible juristocracy, where the courts stand as barriers to the people’s will, thereby leaving a revolution or a coup as the only options.

To this, the counter-argument – mentioned in Judge Sichale’s dissenting opinion – is that the judiciary nonetheless remains a gatekeeper, as it will decide when a proposed amendment violates the basic structure and therefore needs to go through the rigorous four-step “re-founding” procedure. This becomes problematic, because if Article 257 is meant to empower the common person – Wanjiku – to initiate a constitutional amendment process, then placing the constitutional courts as a set of Damocles’ swords that might at any point fall upon that process, cut it short, and demand its replacement by the far more onerous re-founding procedure, can hardly be called empowerment. After all, is it fair to expect Wanjiku to approach the constitutional court every time, to check in advance, whether Article 257 should apply to a proposed amendment, or whether preparations should commence for nationwide civic education, a constituent assembly, and so on?

I suspect that it is for this reason that more than one judge in the majority did try to define the basic structure with a degree of specificity, gesturing – in particular – to the ten thematic areas set out in Article 255(1) of the Constitution. Ultimately, however, the Court of Appeal judgments could not reach a consensus on this point. The upshot of this is that it is likely that the Kenyan courts – more than courts in other jurisdictions – will be faced with litigation that will specifically require them to identify what constitutes the basic structure.


That said, however, I believe that the concern is somewhat overstated. One thing that comes through all of the Court of Appeal judgments is a clear sense that constitutional amendment is a serious endeavour. The stakes – permanent alteration of the Constitution – are high. In such a circumstance, is it that disproportionate to have the constitutional courts involved at the stage of vetting the amendment, simply on the question of which procedural channel it should proceed into? After all, there are jurisdictions where pre-legislative scrutiny for constitutional compliance – whether by a constitutional office such as that of the Attorney-General, or even by a court – exists.

And one can easily imagine how the Kenyan courts can develop norms to minimise the disruption that this will cause. For example, the point at which one million signatures are collected and verified could become the trigger point for judicial examination of whether the initiators should proceed to the next steps under Article 257, or whether the four-step re-founding process applies. Note that this need not be an automatic trigger: the requirement that someone has to challenge the process can remain, but the courts can develop norms that will expedite such hearings, discourage appeals on the specific question of which procedural channel a particular amendment should go down, and so on. The judiciary’s role, then, would remain a limited one: simply to adjudicate whether the proposed amendments are of such import that they need the deeper public participation envisaged in the four-step re-founding process, or whether Article 257 will do. The task will obviously be a challenging one, but not one that is beyond the remit of what courts normally do.

De-politicising politics, and the perils of vox populi, vox Dei

There is an argument that both through the basic structure doctrine, and through its interpretation of Article 257, the court evinces a distrust of politicians and political processes, and a (consequent) valorisation of litigation and the judicial process; that the effect of its judgment is to make the constitution too rigid, and effectively impossible to amend; and that, if we look at Article 257 closely, it was always meant to be a joint effort between politicians and the people, because the threshold barriers that it places – one million signatures and so on – require the institutional backing of politicians to start with. It is further argued that this is not necessarily a bad thing, as (a) even historically, the 2010 Constitution of Kenya was the product of political compromise, and not the outcome of pure public participation that the High Court’s judgment made it out to be; and (b) there is no warrant to demonise politicians and politics as tainted or compromised, or at least, relatively more tainted and compromised than litigation and adjudication.

To this, there is an added concern: judgments that claim to speak in the name of the People invariably end up flattening a plural and diverse society, with plural and diverse interests, into a single mass with a single desire – which only the court is in a position to interpret and ventriloquize. This, then, turns into the exact top-down imposition of norms and values that the doctrine of public participation is meant to forestall.

While I believe that the Court of Appeal did not make either of the two mistakes indicated above, I do think that the argument is a powerful one, and requires the judiciary to exercise consistent vigilance (primarily upon itself). A reading of the High Court and Court of Appeal judgments, to my mind, makes it clear that the Constitution Amendment Bill of 2020 was executive-driven (indeed, it would be a bold person who would go against the unanimous finding of twelve judges, across two courts, on this).

But it is easy to imagine messier and less clear-cut situations. What happens if, for instance, an amendment proposal emerges from a set of people, and then a political party or a charismatic politician takes it up, uses their platform to amplify it, and ultimately helps to push it over the one million signature mark? A point was made repeatedly that politicians are part of The People; now, while the distinction between the two was particularly clear in the BBI case, what happens when it is not so, and when it becomes much more difficult to definitively say, “this proposed Amendment came from the political elite, and not from the People?” Is the answer judicial deference? But if it is deference, wouldn’t it simply allow powerful politicians to use proxies, as long as they did it more cleverly and subtly than the protagonists of the BBI?

The difficulty, I believe, lies in the fact that when you say that Article 257 is a provision for The People, you run into a host of very difficult challenges about who are the People, who are not the People, when is it that the People are acting, and so on. The intuitive point that the High Court and the Court of Appeal were getting at is a clear and powerful one: Article 257 envisages an active citizenry, one that engages with issues and generates proposals for amendments after internal social debate – and not a passive citizenry, that votes “Yes” or “No” to a binary choice placed before it by a set of powerful politicians. And while I believe that that is the correct reading of Article 257, it places courts between the Scylla of short-circuiting even legitimate politics, and the Charybdis of stripping Article 257 of its unique, public-facing character.

I think that the only possible answer to this is continuing judicial good sense. Given the issues it had to resolve, I think that it is inevitable – as pointed out above – that the BBI Judgment would leave some issues hanging. But for me this is not a weakness of the judgment, or a reason to castigate it: I think that there are certain problems that simply can’t be resolved in advance, and need courts to “make the path by walking.”

The grammar of power

Stripped down to the essentials, constitutions are about power: who holds it, who can exercise it, who can be stopped from wielding it; when, how, and by whom. Constitutions are also full of gaps, of silences unintended or strategic, of ambiguities planned and unplanned. Interpretation, thus, is often about the balance of power: resolving the gaps, silences, and ambiguities in ways that alter power relations, place – or lift – constraints upon the power that institutional actors have, and how they can deploy it. When Robert Cover writes, therefore, that “legal interpretation takes place in a field of pain and death,” we can slightly modify it to say that “constitutional interpretation takes place in a field of power.”


At its heart, I think that the BBI Judgment is about power. The issues that span a total of 1089 pages are united by one common theme: the judges in the majority believe that the constitution acts as a barrier against the concentration of power, and as a channel for its dispersal. Why require referendum questions to be grouped together by unity of content? Because doing so will constrain the power of institutional actors to force unpalatable choices upon people in all-or-nothing referenda. Why interpret Article 257 to exclude public office holders from being initiators? Because to hold otherwise would divest power vested in the public, and instead, place it in the hands of a political executive claiming to directly “speak for the people”. Why insist on contextual public participation for the Article 257 process? Because without granular participation, even a “people-driven process” will not be free from centres of power that dominate the conversation. Why insist upon fixing the IEBC quorum at five, and for a legislative framework to conduct referenda? Because independent Fourth Branch Institutions play a vital role in checking executive impunity on a day-to-day basis, in a way that courts often cannot. And lastly, why the basic structure, why this form of the basic structure? Because the power to re-constitute the constitution is the most consequential of all powers: institutional actors should not have it, but nor should the courts have the power to stop it. Thus, the articulation of the primary constituent power, and its exercise through – primarily – procedural steps.

And I think that it is here that we find the most important contribution of the High Court and the Court of Appeal judgments to global constitutional jurisprudence. Reams have been written by now about the “Imperial Presidency”, and the slow – but inevitable – shift, across the world, towards concentration of political power rather than its dispersal. Examining the High Court and Court of Appeal judgments through the lens of power, its structures and its forms, reveals a judiciary that is working with constitutional text and context to combat the institutionalisation and centralisation of power, to prevent the constitution from being used as the vehicle of such a project, and – through interpretive method – to try and future-proof it from ever being so used. It is too early to know if the effort will succeed. The sap and the roots are now the responsibility of future judgments, if sunlight is to be reached, and not just thirsted for.

The hydra and the sword: parting thoughts

There are moments in one’s life when you can tell someone, with utter clarity, that “I was there when. . . .” For my part, I will always remember where I was, and what I was doing, when, during oral arguments before the Court of Appeal, I heard Dr Muthomi Thiankolu’s ten-minute summary of Kenyan constitutional history through the allegory of the Hydra of Lerna. It ended thus:

If you drop the sword, My Lords and My Ladies, we have been there before. When the courts drop the sword of the Constitution, we had torture chambers. We had detentions without trial. We had sedition laws. It may sound, My Lord, that I am exaggerating, but the whole thing began in small bits.

I remember it because by the end, I was almost in tears. It took me back to a moment, more than four years ago, when I stood in another court and heard a lawyer channel Justice William O. Douglas to tell the bench: “As nightfall does not come at once, neither does oppression. In both instances, there is a twilight when everything remains seemingly unchanged. And it is in such twilight that we all must be most aware of change in the air – however slight – lest we become unwitting victims of the darkness.”


The chronicle of events that followed those words does not make for pleasant reading. But as I heard Dr Thiankolu speak of an era of executive impunity – an impunity enabled by a judiciary (with a few exceptions) that saw itself as an extended arm of the executive – what struck me was not how familiar (detentions without trial!) his examples sounded, but that he spoke of them in the past tense. And on the 20th of August, as judge after judge in the Court of Appeal read out their pronouncement, it seemed that an exclamation point was being added to those arguments: the past really had become a foreign country.

One person’s past is invariably another person’s present. But the present sometimes overwhelms us with its heaviness. It creates an illusion of permanence that forecloses the possibility of imagining a future where this present has become the past. We cannot bootstrap ourselves out of such moments: we need someone to show us the way, or to show us, at least, that a way exists.

And so, perhaps the great – and intangible – gift that the Kenyan courts have given to those stuck in an interminable present, is a simple reminder: it needn’t always be like this.
