We’ve built our cities to be vulnerable to – and exacerbate – major weather events such as the one we saw in Auckland on Friday. While almost no city in the world could fully escape the effects of four months’ worth of rain in 24 hours, there are many things that could have been done to avoid some of the worst impacts.
Buildings, streets and car parks are all impermeable surfaces. When it rains, the water rushes off these surfaces and into gutters. From the gutters, the water drains into a stormwater catch basin, through the stormwater network, and into streams and the sea.
Herein lies the problem. The more we build, the more stormwater we need to drain. Every new building or road replaces the planet’s natural stormwater system: plants and soil, and channels for runoff.
The network of pipes can only hold so much water before it is fully inundated and begins to flood. While every block typically has a catch basin or two, they can easily clog with leaves and other debris even before a storm hits. Add an abnormal amount of rainfall, and neighbourhood flooding is nearly guaranteed.
Even if the way we’ve built our cities and the stormwater system could keep up with big storm events – to be clear, they cannot – the network of basins and pipes is aging. With age, the system’s capacity to capture stormwater significantly declines.
Modernising all the stormwater infrastructure will take decades and billions of dollars. This is what the contested Three Waters project is really all about, and we need to quickly get past the political sideshows it has inspired.
While the system ages and suffers from reduced capacity, it is also more prone to failure. It’s not uncommon to see news that stormwater has mixed with raw sewage. This is gross just to think about, but it gets worse.
Because stormwater is not treated, when it gets contaminated that dirty mixture drains into the water around our beaches. It’s why, after a storm, the SafeSwim map is covered in red “high risk” markers.
From Friday’s rain event, some of the most shocking images were of cars and buses trying to wade through flooded roads and busways. The irony is that the roads themselves are a significant contributor to the flooding.
With thousands of miles of sealed roads around Auckland, there was simply nowhere for the water to go. Roads act like channels, funnelling stormwater. With a huge rain event, streets quickly turn into rivers.
Setting aside the concoction of stormwater and raw sewage flowing down streets (which we more politely call a “combined sewer overflow”), and the impact on homes, businesses and beaches, flood waters also present a massive risk to people in cars.
It’s nearly impossible to tell how deep or fast-moving surface flooding is, so people drive into danger.
There is a better way to design our built environment. In the early 2000s, Chinese landscape architect Kongjian Yu created the concept of the “sponge city”. It’s a relatively simple idea, but a big departure from the way we typically build infrastructure.
The concept incorporates green roofs, rain gardens and permeable pavements to absorb and filter water. Better catch systems hold rainwater where possible and reuse it. More green space and trees are also incorporated into street and neighbourhood designs.
Within the sponge city concept is a way to mitigate flooding using “water sensitive urban design”. With this approach, we create spaces that better manage flooding through systems that mimic the natural water cycle.
This can also include floodable infrastructure and parks to take the pressure off more vulnerable parts of the city. There are already examples of these design principles in Auckland, but they are far too limited to eliminate the impact of major storms.
The sponge city concept, and ideas about letting nature handle stormwater, don’t have to be extravagant or expensive. They can be as simple as planting more trees and greenery, using less pavement for driveways or laying more porous concrete for car parks.
In a way, we should do less building and let nature do what it was meant to do.
The stark reality is the flooding we experienced this week, and arguably the storm itself, are of our own making. We’ve built a supercity covered in impervious surfaces, expanded the built environment across sensitive (and flood-prone) areas, and created massive greenhouse gas emissions destabilising the climate.
Climate change will make future storms more intense and more frequent. Do we cross our fingers and hope the rain goes away? Do we invest billions in bigger pipes that will inevitably fail to control flooding and still pollute sensitive waters? Or do we get smarter and more proactive about designing our cities?
If we don’t want to repeat the week’s events, there’s only one real option.
Timothy Welch does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The extraordinary flood event Auckland experienced on the night of January 27, the eve of the city’s anniversary weekend, was caused by rainfall that was literally off the chart.
Over 24 hours, 249mm of rain fell – well above the previous record of 161.8mm. A state of emergency was declared late in the evening.
It has taken a terrible toll on Aucklanders, with two people reported dead and two more missing. Damage to houses, cars, roads and infrastructure will run into many millions of dollars.
Watching the images roll into social media on Friday evening, I thought to myself that I’ve seen these kinds of pictures before – but usually from North America or Asia, or maybe Europe. This time it was New Zealand’s largest city. Nowhere is safe from extreme weather these days.
The torrential rain came from a storm in the north Tasman Sea linked to a source of moisture from the tropics. This is what meteorologists call an “atmospheric river”.
The storm was quite slow-moving because it was cradled to the south by a huge anticyclone (a high) that stopped it moving quickly across the country.
Embedded in the main band of rain, severe thunderstorms developed in the unstable air over the Auckland region. These delivered the heaviest rainfall, with MetService figures showing Auckland Airport received its average monthly rain for January in less than an hour.
The type of storm which brought the mayhem was not especially remarkable, however. Plenty of similar storms have passed through Auckland. But, as the climate continues to warm, the amount of water vapour in the air increases.
I am confident climate change contributed significantly to the incredible volume of rain that fell so quickly in Auckland this time.
There will be careful analysis of historical records and many simulations with climate models to nail down the return period of this flood (surely in the hundreds of years at least, in terms of our past climate).
How much climate change contributed to the rainfall total will be part of those calculations. But it is obvious to me this event is exactly what we expect as a result of climate change.
One degree of warming in the air translates, on average, to about 7% more water vapour in that air. The globe and New Zealand have experienced a bit over a degree of warming in the past century, and we have measured the increasing water vapour content.
But when a storm comes along, it can translate to much more than a 7% increase in rainfall. Air “converges” (is drawn in) near the Earth’s surface into a storm system. So all that moister air is brought together, then “wrung out” to deliver the rain.
A severe thunderstorm is the same thing on a smaller scale. Air is sucked in at ground level, lofted up and cooled quickly, losing much of its moisture in the process.
While the atmosphere now holds 7% more water vapour, this convergence of air masses means the rain bursts can be 10% or even 20% heavier.
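The arithmetic in the preceding paragraphs can be sketched in a few lines of code. This is an illustrative calculation only, not the author's: the roughly 7%-per-degree moisture scaling comes from the article, while the "convergence" amplification factor is an assumed placeholder chosen to reproduce the 10–20% range mentioned above.

```python
# Illustrative sketch of the moisture-scaling argument above.
# ~7% more water vapour per degree of warming (Clausius-Clapeyron),
# amplified by air converging into a storm system.

def extra_moisture(warming_deg_c, rate_per_degree=0.07):
    """Fractional increase in atmospheric water vapour for a given warming."""
    return (1 + rate_per_degree) ** warming_deg_c - 1

def storm_rainfall_boost(warming_deg_c, convergence_factor=2.0):
    """Fractional increase in storm rainfall. The convergence_factor of 2
    is an illustrative assumption, not a measured value."""
    return extra_moisture(warming_deg_c) * convergence_factor

if __name__ == "__main__":
    for warming in (0.5, 1.0, 1.5):
        print(f"{warming:.1f} C warming -> "
              f"{extra_moisture(warming):.1%} more vapour, "
              f"~{storm_rainfall_boost(warming):.1%} heavier rain bursts")
```

At one degree of warming this gives 7% more vapour and, with the assumed doubling from convergence, rain bursts around 14% heavier, consistent with the 10–20% range described above.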
The longer we continue to warm the climate, the heavier the storm rainfalls will get.
Given what we have already seen, how do we adapt? Flooding happens when stormwater can’t drain away fast enough. So what we need are bigger drains, larger stormwater pipes and stormwater systems that can deal with such extremes.
The country’s stormwater drain system was designed for the climate we used to have – 50 or more years ago. What we need is a stormwater system designed for the climate we have now, and the one we’ll have in 50 years from now.
Another part of the response can be a “softening” of the urban environment. Tar-seal and concrete surfaces force water to stay at the surface, to pool and flow.
If we can re-expose some of the streams that have been diverted into culverts, re-establish a few wetlands among the built areas, we can create a more spongy surface environment more naturally able to cope with heavy rainfall. These are the responses we need to be thinking about and taking action on now.
We also need to stop burning fossil fuels and get global emissions of carbon dioxide and other greenhouse gases down as fast as we can. New Zealand has an emissions reduction plan – we need to see it having an effect from this year. And every country must follow suit.
As I said at the start, no community is immune from these extremes and we must all work together.
James Renwick receives funding from MBIE to study climate variability and change, and has in the past received funding from regional government to study climate change effects. He is a Commissioner with the NZ Climate Change Commission.
In the case of the five Black, former Memphis police officers accused of murder in the beating death of Tyre Nichols, justice has moved quickly.
Less than 30 days after Nichols’ Jan. 10, 2023, death, the former officers were charged with second-degree murder, assault, kidnapping, official misconduct and official oppression.
The Memphis Police Department released video footage of the officers’ encounter with Nichols on Jan. 27, 2023. And some who’ve seen the video, which includes footage captured by body-worn cameras, cameras mounted on dashboards of police vehicles and security cameras on utility poles in the vicinity, have described it as “horrific.”
Before the video was released, Memphis Police Chief Cerelyn Davis told CNN: “You are going to see acts that defy humanity.”
In recent years, as national outrage over systemic racism within U.S. law enforcement has grown, The Conversation U.S. has published several articles on police brutality, race and the U.S. criminal justice system.
Media Studies Professor Sandra Ristovska examines the use of video as evidence in state and federal courts in the U.S. and writes about the Rodney King and George Floyd cases where jurors interpreted video evidence differently.
In the King case, the four Los Angeles police officers were acquitted of charges of assault and excessive use of force as the jury believed the video showed a justified response to King’s allegedly frightening actions.
Lead prosecutor Terry White ended his closing arguments by asking the jury: “Now who do you believe, the defendants or your own eyes?”
In the Floyd case, jurors believed their own eyes and convicted Derek Chauvin for the murder of Floyd.
As Ristovska explains, bystander, bodycam and dashcam videos of policing can be powerful forms of evidence.
“Yet judges, attorneys and jurors may see and treat video in varied ways that can lead to inconsistent renderings of justice,” she writes.
As historian Clare Corbould explains, police violence that disproportionately targets African Americans long predates portable video cameras.
Where Black Africans were once enslaved to provide cheap labor, Corbould writes, they are now policed, charged, indicted and incarcerated at staggering rates.
“As many have noted since [George] Floyd’s murder, the origins of U.S. policing lie in the control of supposedly disorderly populations,” Corbould writes, “whether of enslaved people or, after the end of slavery, an impoverished class of laborers including Black people and immigrants.”
In their peer-reviewed study of data on 235 U.S. city police departments from 2000 to 2016, Thaddeus L. Johnson and Natasha N. Johnson found that police forces requiring at least a two-year college degree for employment are less likely to employ officers who engage in actions that cause the deaths of Black and unarmed citizens.
As they explain, “Our results demonstrated that college minimums are associated with as much as three times lower rates of police-related fatalities involving Black people than police forces without a college degree requirement.”
Their findings further suggest that the impact of a more educated police force may emerge during only the most dangerous encounters that often precede the use of weapons.
More research needs to be done, but they conclude that police agencies trying to reduce fatal confrontations should consider ways to recruit college-degreed applicants while supporting college attendance among current officers.
Editor’s note: This story is a roundup of articles from The Conversation’s archives.
The U.S. Food and Drug Administration’s key science advisory panel, the Vaccines and Related Biological Products Advisory Committee, met on Jan. 26, 2023, to chart a path forward for COVID-19 vaccine policy. During the all-day meeting, the 21-member committee discussed an array of weighty issues including the efficacy of existing vaccines, the composition of future vaccine strains and the need to match them to the circulating variants of SARS-CoV-2, the possibility of moving to an annual-shot model, the potential seasonality of the virus and much more.
But the key question at hand, and the only formal question that was voted on, following a proposal from the FDA earlier in the week, had to do with how to simplify the path to getting people vaccinated.
The Conversation asked immunologist Matthew Woodruff, who has been on the front lines of studying immune responses to COVID-19 since the early days of the pandemic, to walk us through the big questions of the day and what they mean for future COVID-19 vaccine strategies.
The question put before the committee for a vote was whether to move to one COVID-19 vaccine consisting of a single composition for all people – whether currently vaccinated or not – and away from the current model that includes one formulation given as a primary series and a separate formulation administered as a booster. Importantly, approved formulations could come from any number of vaccine manufacturers, not just those that have currently authorized vaccines.
The U.S. Centers for Disease Control and Prevention currently requires that the primary series of shots, or the first two doses of the vaccine that a patient receives, consist of the first generation of vaccine against the original strain of SARS-CoV-2, known as the “Wuhan” strain of the virus. These shots are given weeks apart, followed months later by a booster shot that was updated in August 2022 to contain a bivalent formulation of vaccine that targets both the original viral strain and newer subvariants of omicron.
The committee’s endorsement simplifies those recommendations. In a 21-to-0 vote, the advisory board recommended fully replacing, or “harmonizing,” the original formulation of the vaccine with a single shot that would consist of – at least for now – the current bivalent vaccine.
In doing so, it has signaled its belief that these new second-generation vaccines are an upgrade over their predecessors in protecting from infection and severe illness at this point in the pandemic.
For now, the single shot will be bivalent. But this may not always be the case.
There was a general agreement that the current bivalent shot is preferable to the original vaccine targeted at the Wuhan strain of the virus by itself. But committee members debated whether that original Wuhan vaccine strain should continue to be a part of updated vaccine formulations.
There is no current data comparing a monovalent, or single-strain, vaccine that targets omicron and its subvariants against the current bivalent shot. As a result, it’s unclear how a monovalent shot against recent omicron subvariants would perform in comparison to the bivalent version.
A main reason for the debate over monovalent versus bivalent – or, for that matter, trivalent or tetravalent – vaccines is a lack of understanding around how best to sharpen an immune response to a slightly altered threat. This has long been a debate surrounding annual influenza vaccination strategies, where studies have shown that the immune “memory” that forms in response to a prior vaccine can actively repress a robust immune response to the next.
This phenomenon of immune imprinting, originally coined in 1960 as “original antigenic sin,” has been a topic of debate both within the advisory committee and within the broader immunological community.
Although innovative strategies are being developed to overcome potential problems with routinely updated vaccines, they are not yet ready to be tested in humans. In the meantime, it is unclear how bivalent versus monovalent vaccine choices might alter this phenomenon, and it is very clear that more study is needed.
While a significant portion of the discussion focused on the mRNA vaccine platform used by both Pfizer and Moderna, several committee members emphasized the need for new technologies that could provide broader immunological protection. Dr. Pamela McInnes, a now-retired longtime deputy director of the National Center for Advancing Translational Sciences, highlighted this point, saying, “I would make a plea for ongoing research on broader protection, maybe different platforms, maybe a different approach.”
A good deal of attention was also directed toward Novavax, a protein-based formulation that relies on a more traditional approach to vaccination than the mRNA-based vaccines. Although the Novavax vaccine has been authorized by the FDA for use since July 2022, it has received much less national attention – largely because of its latecomer status. Nonetheless, Novavax has boasted efficacy rates on par with its mRNA cousins, with good safety profiles and less demanding long-term storage requirements than the mRNA shots.
By simplifying the vaccine schedule to include only a single vaccine formulation, the committee reasoned, it might be easier for competing vaccination platforms to break into the market. In other words, newer vaccine contenders would not have to rely on patients’ having already received their primary series before using their products. Companies seemed ready to take advantage of that future flexibility, with researchers from Pfizer, Moderna and Novavax all revealing their companies’ exploration of a hybrid COVID-19 and flu shot at various stages of clinical trials and testing.
Not necessarily. Currently, the influenza vaccine is decided by committee through the World Health Organization. Because of its seasonal nature, the strains to be included in each season’s flu vaccine for the Southern and Northern hemispheres, with their opposing winters, are selected independently. The Northern Hemisphere’s selection is made in February for the following winter based on a vast network of flu monitoring stations around the globe.
Although there was broad consensus among panelists that the shots against SARS-CoV-2 should be updated regularly to more closely match the most current circulating viral strain, there was less agreement on how frequent that would be.
For instance, rapidly mutating strains of the virus in both summer and winter surges might necessitate two updated shots a year instead of just one. As Dr. Eric Rubin, an infectious disease expert from the Harvard T.H. Chan School of Public Health, noted, “It’s hard to say that it’s going to be annual at this point.”
Matthew Woodruff receives funding from the National Institutes of Health and the US Department of Defense to support his academic research.
After a decade, the federal government has reached an agreement to settle a class action lawsuit that included 325 First Nations across Canada. The class action was initiated by the Tk'emlúps te Secwépemc and shíshálh Nation in 2012. It was concerned with, among other issues, the loss of language and culture through Residential Schools. The settlement, worth $2.8 billion, includes support for cultural revitalization with focus on heritage, wellness and languages.
Efforts toward cultural revitalization will be funded by the $50 million Day Scholars Revitalization Fund. An important aspect of the fund will be the central role Indigenous Peoples will have in managing and guiding the process of supporting the cultural revitalization.
This settlement, just as the Indian Day School Settlement and the Indian Residential School Settlement before it, focuses on the justice necessary to address physical and emotional harms, and the long term impacts that they had for Indigenous communities and their national, cultural and traditional identities.
These traumatic impacts were deliberately inflicted on Indigenous Peoples through a focus on the most vulnerable members of a community — their children. Over generations, many Indigenous children and youth who attended these schools lost their language and culture, and thousands lost their lives. The trauma of those experiences may be too horrific to recount. The intergenerational trauma experienced by the communities affected by these schools was also profound, and these policies constitute genocide.
A recurrent theme in the narratives of survivors is how Indigenous identities have been adversely affected, and principal among those aspects are Indigenous languages. Frequently regarded as one of the central components of Indigenous cultural identity, language revitalization has become of paramount importance.
The Truth and Reconciliation Commission of Canada’s (TRC) Calls to Action contain a number of imperatives related to languages. Call to Action 14 identifies Indigenous languages as “a fundamental and valued element of Canadian culture and society.” The reasons behind this are not difficult to understand: language allows humans to communicate ideas and is one of the pillars that support a people’s culture, traditions and history.
The importance of Indigenous languages is not just reflected in the special cultural and national features that they represent for Indigenous Peoples. They also are the optimum way to represent Indigenous knowledge, heritage and consciousness — such manifestations are undermined by the use of non-Indigenous languages.
The Day Scholars Revitalization Fund represents an important opportunity for those involved in the class action. First and foremost is the issue of agency. Responsibility for developing and employing a plan of action to utilize the funds rests with Indigenous Peoples.
The issue of agency is essential given the history of unjust government control over matters that affect Indigenous communities. Indigenous people must have an adequate voice, influence and control in regard to issues, initiatives and policy that affect them, their communities and their territories. As is frequently proclaimed by Indigenous Peoples: Nothing about us without us!
There are a number of ways that Indigenous communities can support the revitalization of their languages. The fundamental starting point is best summed up by the words of then chief commissioner of the TRC, Murray Sinclair: “Education got us into this mess and education will get us out.”
Canada has a rich and diverse history of Indigenous languages. However, most Indigenous children and youth, whether in public or on-reserve schools, are still educated in English and French.
There are however some encouraging developments in some Indigenous communities. In the far north, efforts have been made to ensure that Inuktut is the principal language of instruction in some Inuit schools. In Manitoba, some school divisions have created opportunities for First Nations languages such as Anishinaabemowin to be featured in classroom programming.
Partnerships between Indigenous communities and their respective schools need to be established to support the sorts of institutional transformations necessary to support curricular development, classroom resources and recruitment of qualified teachers.
These transformations require the voice, influence and control of Indigenous Peoples, and efforts should be marshalled to support such participation. Indigenous communities have worked hard to establish such partnerships. In the community of Kahnawa:ke, schools such as Karonhianónhnha Tsi Ionterihwaienstáhkhwa employ an immersion programme to sustain the Kanien’keha language.
Educational programming is crucial to revitalizing Indigenous languages, but it’s not the only piece of this puzzle. Community conditions outside of the school in which children and youth have opportunities to speak the language are also essential.
Communities need to develop strategies that provide improved opportunities for young people to learn and retain their language. Children and youth should be encouraged to use Indigenous languages outside of school as well, through community laws, commerce and media. Such initiatives require the commitment of community members, and the support of the Day Scholars Revitalization Fund may be well suited to this purpose.
Frank Deer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
To the delight of investors across the cryptosphere, the price of bitcoin (BTC) has rallied over 53% since its low of US$15,476 (£12,519) in November. Now trading around US$23,000, there’s much talk that the bottom has finally been reached for the leading cryptocurrency after a year of painful decline – in November 2021, the price peaked at almost US$70,000.
If so, it’s not only good news for bitcoin but the whole market in cryptocurrencies, since the others broadly move in line with the leader. So is crypto back in business?
The past is littered with various periods of market turmoil, from the global financial crisis of 2007-09 to the COVID-19 collapse in 2020. But neither of these is a particularly good comparison for our purposes because they both saw sharp drops and recoveries, as opposed to the slow unwinding of bitcoin. A better comparison would be the dotcom bubble burst in 2000-02, which you can see in the chart below (the Nasdaq is the index that tracks all tech stocks).
Nasdaq 100 index 1995-2005
Look at the bitcoin chart since it peaked in November 2021 and the price action looks fairly similar:
Bitcoin bear market price chart 2021-23
Both charts show that bear markets go through various periods where prices rise but don’t reach the same level as the previous peak – known as “lower highs”. If bitcoin is following a similar trajectory to the early 2000s Nasdaq, it would make sense that the current price will be another lower high and that it will be followed by another lower low.
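The "lower highs" pattern described above can be made concrete with a toy check. This is an illustrative sketch only, not a trading tool or the author's method: it finds local peaks in a price series and tests whether each successive peak is lower than the one before.

```python
# Toy check for the "lower highs" bear-market pattern described above.

def local_peaks(prices):
    """Indices where the price is higher than both of its neighbours."""
    return [i for i in range(1, len(prices) - 1)
            if prices[i] > prices[i - 1] and prices[i] > prices[i + 1]]

def makes_lower_highs(prices):
    """True if every local peak is lower than the previous one."""
    peaks = [prices[i] for i in local_peaks(prices)]
    return len(peaks) >= 2 and all(b < a for a, b in zip(peaks, peaks[1:]))
```

For example, the series `[1, 5, 2, 4, 1, 3, 0]` has peaks at 5, 4 and 3 and so makes lower highs, while `[1, 3, 2, 4, 1]` does not.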
This is partly because like the 2000s Nasdaq, bitcoin seems to be following a pattern known as an Elliott Wave. Named after the renowned American stock market analyst Ralph Nelson Elliott, this essentially argues that during a bear phase, investors shift between different emotional states of disappointment and hope, before they finally despair and decide the market will never turn in their favour. This is a final wave of heavy selling known as capitulation.
You can see this idea on the chart below, where bitcoin is the green and red line and Z is the potential capitulation point at around US$13,000. The black line is the path that the Nasdaq took in the early 2000s. The blue pointing finger above that line is potentially the equivalent place to where the bitcoin price is now.
Bitcoin now vs Nasdaq in the early 2000s
The one other thing to note on the chart is the wavy line that’s moving horizontally along the bottom. This is the stochRSI or stochastic relative strength index, which is an indication of when the asset looks overbought (when the line is peaking) or oversold (when it’s bottoming).
A sign of a coming shift is when the stochRSI moves in the opposite direction to where the price is heading: so now the stochRSI is coming down but the price has held up around US$23,000. This too suggests a fall could be imminent.
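For readers curious how the stochRSI mentioned above is computed, here is a minimal Python sketch. It is illustrative only and uses simple averages rather than the Wilder smoothing most charting packages apply, so its values will differ slightly from theirs; the structure, an RSI rescaled against its own recent range, is the standard definition.

```python
# Minimal stochastic RSI (stochRSI) sketch - simple averages, not
# Wilder-smoothed, so values differ slightly from charting packages.

def rsi(prices, period=14):
    """Relative Strength Index: gains vs losses over the last `period` changes."""
    changes = [b - a for a, b in zip(prices, prices[1:])]
    out = []
    for i in range(period, len(changes) + 1):
        window = changes[i - period:i]
        gains = sum(c for c in window if c > 0)
        losses = -sum(c for c in window if c < 0)
        if losses == 0:
            out.append(100.0)  # no losing days in the window
        else:
            rs = gains / losses
            out.append(100 - 100 / (1 + rs))
    return out

def stoch_rsi(prices, period=14):
    """StochRSI: where the latest RSI sits within its recent range (0 to 1)."""
    r = rsi(prices, period)
    vals = []
    for i in range(period, len(r) + 1):
        window = r[i - period:i]
        lo, hi = min(window), max(window)
        vals.append(0.5 if hi == lo else (window[-1] - lo) / (hi - lo))
    return vals
```

A reading near 1 marks the "peaking" (overbought) condition and a reading near 0 the "bottoming" (oversold) condition described above.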
Within markets, there is often a game that investors from institutions such as banks and hedge funds play with amateur (retail) investors. The aim is to transfer retail investors’ wealth to these institutions.
This is particularly easy in an unregulated market like bitcoin, because it is easier for institutions to manipulate prices. They can also talk up (or talk down) prices to stir up retail investors’ emotions, and get them to buy at the top and sell at the bottom. This “traps” the irrational investors who buy at higher prices, transferring wealth by giving the institutions an opportunity to convert their holdings into cash.
It therefore makes sense to compare how the retail and institutional investors have been behaving lately. The following charts compare those crypto wallet addresses that hold 1 BTC or more (mostly retail investors) with those holding upwards of 1,000 BTC (institutional investors). In all three charts, the black line is the bitcoin price and the orange line is the number of wallets in that category.
Retail investor behaviour
Institutional investor behaviour pt 1
Institutional investor behaviour pt 2
This shows that since the FTX scandal back in November, which led to the collapse of the world’s second-largest crypto exchange, retail investors have been buying bitcoin aggressively, resulting in the highest-ever number of addresses holding at least one BTC. On the other hand, the biggest institutional investors have been offloading. This suggests the institutional investors agree with our analysis.
There are those who argue that bitcoin is a bubble and that ultimately cryptocurrencies are worthless. That’s a separate debate for another day. If we assume there is a future for blockchains, which are the online ledgers that enable cryptocurrencies, the key question is when bitcoin will reach the accumulation phase that typically ends a bear phase in any market.
Known as Wyckoff accumulation, this is where the price of the asset repeatedly tests two areas: the upper bound where traders previously sold heavily enough for the price to stop rising (known as resistance), and the lower bound where traders bought heavily enough that the price stopped going down (known as support).
At the point where institutional investors decide the lower bound has proved to be sufficiently resilient – in other words, they think the price is cheap at that level – they will start buying the asset again. That moment is only likely to come after there has been a capitulation.
Of course, history does not repeat itself exactly. It may be this is the first time that retail investors have outsmarted the large institutions, and that the only way is now up.
More likely, however, there is more pain on the way. With a recession on the cards, unprecedented job layoffs and weak retail data coming out of the US, it doesn’t point to the kind of optimism that tends to move markets higher. It would therefore make sense to brace yourself for another plunge in the price of bitcoin and the rest of the crypto market.
James Kinsella works part-time as an investment analyst for Tyndall Asset Management.
Richard Fairchild does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Writers of speculative and science fiction often identify a key point in time and explore how a seemingly insignificant event might change the path of humanity.
One of these moments came in the 1970s when oil giant Exxon chose to ignore its own commissioned research on the impact of fossil fuels. A new analysis published in the journal Science has found that Exxon’s forecasts from that era have proven incredibly accurate, yet it did not act to prevent its own predictions from happening.
Instead, the company chose to maintain its role as an oil company and fund people to question the science and delay a coherent response. Staggeringly, in 1996 the company’s chief executive, Lee Raymond, referred to “the unproven theory that [fossil fuels] affect the earth’s climate”. The company, now known as ExxonMobil, denies the allegations, saying “those who talk about how ‘Exxon Knew’ are wrong in their conclusions”.
So what if the senior executives of Exxon had seen their own research as a business opportunity? Here’s one way things might have worked out.
Following the publication of terrifying research by Exxon in the late 1970s and the “energy crisis” in 1979, the policy direction of the US changes forever.
Nasa’s earth sciences funding is soon increased. The agency responds enthusiastically by launching several satellites which, over the 1980s, confirm the Exxon research beyond any reasonable doubt – the world is indeed warming, thanks to human-caused emissions.
Senator (and in this world future president) Al Gore invites Nasa’s James Hansen to present his findings, supported by the work of Exxon, to congress. As a result the US government commits to a net zero carbon economy by 2000. (A similar presentation happened in our world but, faced with greater scientific scepticism, it didn’t have much immediate policy impact.)
Following this, Exxon establishes a massive solar thermal power plant in the Californian desert. Unfortunately, complex engineering and intermittent energy production make it a challenging addition to the US energy grid. However, after ten years of research, the tech is exported to Egypt and Morocco, where the output is more than enough to power both countries.
Further research results in enormous economic growth as the technology not only produces power but food through the use of seawater greenhouses. By 2000, North Africa is the main exporter of large solar power plants around the world. This economic success is matched in northern Europe with government-supported firms developing offshore wind turbines and tidal power throughout the 90s.
Back in the US, Exxon teams up with General Motors in the late 1980s to develop the first production electric vehicle, the EV1. (This existed in our world too, but not until a decade later.) The car uses Nasa-patented batteries and space-age materials, outperforming petroleum vehicles in every area but extreme range.
Exxon’s PR machine devises a “plugging into the Sun” programme promoting micro rooftop solar panels that refuel the EV1s for free. Millions of systems are manufactured and installed by subsidiaries of Exxon, making it the wealthiest “energy” company on the planet.
The micro-grids developed for car charging are also suitable for developing countries without large electrical grids. A second wave of development occurs, this time driven internally by countries across the southern hemisphere. Exxon is held up as alleviating extreme poverty across the world and improving the lives of billions.
By the late 1990s, huge “liquid metal” batteries allow inter-seasonal energy storage, creating an energy reserve sufficient to allow the rollout of large wind and solar projects around the world. This makes coal and oil too expensive for energy production; their use is ramped down and consigned to the history books by 1997.
The use of petroleum and gas does continue in the domestic sector, but construction moves beyond the need for active heating and cooling by the end of the decade, and driving petroleum cars is seen as a quaint hobby for those who wish to use this very risky fuel.
The age of oil is not entirely over. Demand for petrol continues at a level at which oil companies can still make a small profit (environmentalists claim the oil companies are making “gas cars” cool so they don’t lose their final market).
However, seeing an opportunity in gasoline, many renewable energy firms begin manufacturing “synth oil”, another space-age output. The mineral oil companies push back but are unable to compete with synth oil’s extremely low prices, as it is made with virtually free off-peak energy from renewable systems.
By the 2000s, human society produces barely any greenhouse gases for manufacturing, transport or energy. Things are not perfect, and there are concerns about poverty, conflict, resources running out and the ecological impact of 8 billion humans and their dietary choices. The challenge for a stable, sustainable human society continues.
But climatic collapse – as we understand it in our world today – has largely been avoided.
And Exxon? Much like in our own timeline, Exxon is one of the world’s largest companies. But its massive rollout of distributed solar systems has also made it one of the world’s most liked companies.
In our world, former US vice president Al Gore won the Nobel peace prize in 2007 together with the UN’s climate advisory body, the IPCC. In this world, Gore still gets a Nobel for his work in the 1990s, but shares it with Exxon CEO Lee Raymond – there is less need for an IPCC as scientists were listened to three decades previously.
John Grant does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Older and middle-aged women are having their moment in the sun, it seems. The recent Golden Globes coverage was filled with images of “older” women on the red carpet. There were some notable wins too.
Angela Bassett, Michelle Yeoh and Jennifer Coolidge, all in their 60s, won their respective categories and in their speeches addressed the significance of receiving these awards later in their careers. The recently announced Oscar nominations also featured many older women, with four of the five nominations in the best actress category taken by women over 40 – including Yeoh and Cate Blanchett (53). Other categories also featured women over 60, like Jamie Lee Curtis (64) and Bassett for actress in a supporting role.
This has been heartening for many. In the past, female actors have felt there was an expiry date on their careers, so it’s nice to now see women over 40 thriving in complex and exciting leading roles.
I remain sceptical about this becoming a long-term trend. Ageism is very deeply embedded in our society and it will not go away with several women in their 60s winning at the Golden Globes or being nominated at the Oscars.
After all, look carefully at the media coverage around this and you’ll notice that much of it is rooted in ageism. There was a slew of articles about older women having a “sartorial moment” at the Golden Globes. The underlying message here was that “they looked great despite their age”.
No one was talking about “older men”, even though there were many men in their 40s and older on the red carpet, and plenty of articles about the best-dressed men at the Golden Globes. Men do not require this classifier; their ages were not typically mentioned in these articles. Women, on the other hand, are qualified by their ages and judged accordingly. Age is definitely not just a number for them.
Psychiatrist and gerontologist Dr Robert Butler coined the term “ageism” in 1969.
Ageism or age bias affects men and women differently. As I discussed at length in my book Sway: Unravelling Unconscious Bias, the research in bias has been slow to address issues of age-related bias and discrimination.
In a 2004 report by Age Concern in the UK, one in three people surveyed thought older people were “incompetent and incapable”. Explicit discrimination and bias are not only illegal but increasingly frowned upon. Yet implicit biases against age persist.
Ageism is usually very subtle but there is evidence across a number of domains to show how it works in subversive ways. Much like racism and sexism, it counts on separating someone out for “difference” and lazy stereotyping.
Yes, the representation of women of all ages on our screens and in books will go some way towards countering this narrative. And there are organisations and filmmakers who are now challenging ageism in films and media.
But it wasn’t long ago that Maggie Gyllenhaal, in her late 30s, was told she was too old to play a love interest opposite a 44-year-old man, and Jamie Denbo (43) was considered not young enough to play the wife of a 57-year-old actor and mother to an 18-year-old.
Gender plays a huge role in how those going through the ageing process are perceived. Women face more barriers as they grow older compared to men, a double whammy of sexism and ageism (also racism for women of colour). This is the “George Clooney effect”, or what economists call the “attractiveness penalty”.
While older women are called “hags”, men are still virile and called “silver foxes” as they grow older. Grey hair gives men like Clooney, Tom Jones and Colin Firth an air of sophistication and distinction. By contrast, a 2021 study found that women with grey hair were considered less competent.
It’s not only hair. The internet erupted when 50-year-old ex-model Helena Christensen stepped out in a lacy bustier. Former Vogue editor Alexandra Shulman wrote of the model, “something you wore at 30 will never look the same on you 20 years later. Clothes don’t lie” and called her “tacky”. There are stereotypes as to how older women should act and behave.
A University of Southern California study of the films nominated for best picture between 2014 and 2016 showed that only 11.8% of the actors were 60 or older and, significantly, that 78% of the films had no older women in leading or supporting roles.
In 2016, an exhaustive study on film dialogue from screenplays of over 2,000 films across genres found that the percentage of dialogue available to women decreased significantly with age compared to men. Men over 40 had more roles and spoken dialogue (55 million words for the 42–65 age group) compared to women in the same age group (11 million words).
Women have more cosmetic treatments targeted at them. We see social media ads for shrink-wrapping our necks and many brands of anti-wrinkle creams lining shop shelves. There’s no way to escape this fear and anxiety of being “past it” and not being considered relevant.
We are often complicit in our own marginalisation as we grow older, through the implicit bias we harbour about old age. Language matters too, even when we think we are trying to counter some of these biases. When someone says “you are only as old as you feel”, uses the phrase “young at heart”, or insists they “don’t feel old”, they are displaying some of the implicit biases and fears associated with ageing.
Ageism is a reality. And it affects women much more than men, due to the intersection of gender and age.
The only way to address this is through collective commitment, action and acknowledgement of both externalised and internalised forms of ageism. And the media has to take a huge share of responsibility for how it perpetuates and reinforces ageism in our society through words and images. Until then, age will not just be a number. Especially for women.
Pragya Agarwal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
China stuck rigidly to a zero-COVID policy until December 2022. This included travel restrictions, mass testing and mandatory quarantines. The rapid lifting of this strategy led to a surge of COVID infections across the country.
There have been concerns that the Chinese lunar new year travel in January may cause this wave of COVID to spread much further and faster, with significant numbers of hospital admissions and deaths.
Lunar new year involves hundreds of millions of people travelling across the country, and is considered to be the world’s largest annual migration event.
So how have things been tracking in China, and how will lunar new year trips affect COVID transmission? Our modelling may provide some clues.
This year lunar new year fell on January 22, though population movements for the celebrations began on January 7 and will run until February 15. Domestic travel was expected to peak around January 19.
According to estimates from the Chinese Ministry of Transport, the total number of lunar new year travellers is expected to have increased by 99.5% over the same period in 2022 and returned to 70.3% of what it was in 2019.
Through WorldPop, a research group based at the University of Southampton which maps global population distribution for health and development, we have continued to analyse population movements and their relationship to COVID transmission throughout the pandemic. Our earlier research indicated that lunar new year movements contributed significantly to the initial spread of the virus in January 2020.
This new wave has largely been driven by the omicron sub-lineages BA.5.2 and BF.7. We used an epidemiological model to simulate the transmission of these omicron variants across 339 areas in mainland China from November 1, 2022 to February 28, 2023.
This work has not yet been peer reviewed but our model estimated changes in the number of susceptible, exposed, infectious and recovered or isolated people within each area and their daily movements between areas. We incorporated numerous different sources of data, including intracity and intercity mobility data, vaccine uptake data by province, and COVID-related search index data on the Chinese internet search platform Baidu.
An important element of our model is the R value, which indicates how many people on average one infected person will infect in a susceptible population. We estimated R using reported case information and other data.
We compared the results of our model with online survey data on COVID infections, and we tested different R values and epidemiological parameters to better assess the uncertainties around our estimates.
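As a rough illustration of how such a compartmental model works, here is a toy single-area SEIR simulation. It is not the study’s calibrated 339-area model, which also tracks daily movements between areas; the function name and every parameter value below are hypothetical placeholders.

```python
# Toy discrete-time SEIR sketch for one area, loosely following the
# compartments described above (susceptible, exposed, infectious,
# recovered/isolated). All parameter values are illustrative
# placeholders, not the study's calibrated estimates.

def simulate_seir(population, r_value, infectious_days=5.0,
                  latent_days=3.0, initial_infectious=100, days=120):
    """Return a list of daily (S, E, I, R) counts. The transmission rate
    beta is derived from the R value via R = beta * infectious_days."""
    beta = r_value / infectious_days   # infections per infectious person per day
    sigma = 1.0 / latent_days          # rate of E -> I (becoming infectious)
    gamma = 1.0 / infectious_days      # rate of I -> R (recovery/isolation)
    s = float(population - initial_infectious)
    e, i, r = 0.0, float(initial_infectious), 0.0
    history = []
    for _ in range(days):
        new_exposed = min(beta * s * i / population, s)  # cap so S stays non-negative
        new_infectious = sigma * e
        new_recovered = gamma * i
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

history = simulate_seir(population=1_000_000, r_value=10)
peak_day = max(range(len(history)), key=lambda d: history[d][2])
attack_rate = history[-1][3] / 1_000_000
print(f"infections peak on day {peak_day}; {attack_rate:.0%} infected overall")
```

With an R value as high as 10, even this crude model infects the large majority of the population within a couple of months, which gives a sense of why such a high R value implies very fast, near-universal spread.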
Baidu searches with the term “fever” showed that most Chinese areas reached a peak in searches around December 20.
Baidu searches for ‘fever’
Based on this and other data, and an R value of 10, further adjusted by intracity mobility data, our model estimated that COVID infections nationwide peaked around December 26 to 28. At that time, roughly 4.2% of the Chinese population were probably infected, as shown in the figure below.
Estimated COVID infections in China
We also estimated that infections in 76% of areas peaked in December and 21% between January 1 and 10. The remaining 3% would reach the peak after January 10.
By December 31, we believe 73%–79% of all people in China would have been infected in this wave.
Estimated infection peaks by area
Our estimates under an R value of 10 are consistent with recent reports released by the Chinese Center for Disease Control and Prevention (CDC). The CDC reported that the positive rate of COVID tests peaked between December 22 and 27 across the country. China has also passed the peaks of fever-related outpatient visits in both rural and urban areas (December 23), emergency department visits (January 2) and admissions of severe cases (January 5).
Our results are also consistent with the findings of recent online surveys on COVID infections conducted in different provinces. For example, the CDC in Sichuan, a province in western China, reported that the overall infection rate among its residents had exceeded 80% by January 1, with a peak between December 12 and December 23. And Henan province in central China reported that its infection rate was 89% by January 6, after peaking on December 19.
Since most cities are estimated to have passed the peak of infections before January 10, and the majority of the population has already been infected, we expect the lunar new year travel will have a limited impact on the trajectory of COVID transmission in this wave across the country.
Of course, there may be subsequent waves of infections, for example in summer, due to waning immunity and the possible emergence of new variants.
We intend to refine our analysis with the latest data and publish a full report setting out our research in the coming weeks. But it’s important to note that at this stage, this work has not yet been peer-reviewed.
Whatever the precise estimates this and other models generate, it’s clear there are significant risks of severe disease and death among vulnerable groups such as the elderly. There’s also high pressure on health services, and relatively inadequate healthcare resources in rural areas. Measures like increased vaccine uptake in older people will be vital to ensuring the impact of COVID in China is reduced in future waves.
Shengjie Lai receives funding from the Bill & Melinda Gates Foundation, the National Institutes of Health, the EU H2020, and the National Natural Science Foundation of China. We collaborated with the School of Population Medicine and Public Health at the Chinese Academy of Medical Sciences in this study. The authors thank Dr Michael Head for providing insightful comments to improve this study and report.
Andrew J Tatem receives funding from the Bill and Melinda Gates Foundation, the EU Horizon 2020 program and the National Institutes of Health.
Mohandas Karamchand “Mahatma” Gandhi remains, even 75 years after his assassination, a useful symbol for many in India. For secularists, the leader of the country’s independence movement represents an imagined India of the past. For the current government, he is a means by which it can soften its international image.
In a 2002 essay, academic Ashis Nandy identified four versions of Gandhi, who led India’s move from British colony to independent nation.
The first is the Gandhi of the Indian state and of official Indian nationalism. The second is a puritanical and sombre figure, apolitical and dependent on state funding, the subject of university seminars debating: “What would Gandhi do?”
The third is the “Gandhi of the ragamuffins”, opposing mechanisation, large-scale development and a high-consumption economy. The fourth is Gandhi the non-violent revolutionary, a worldwide phenomenon, influential in movements but no longer feared by tyrants, nor taken seriously by the left.
Over the past two decades, however, Gandhi and his legacy have taken a thorough beating.
Reappraisals of Gandhi are, admittedly, long overdue. Titles such as “Mahatma” (“high-souled” or “venerable” in Sanskrit) and “Father of the Nation” have worn thin since his death, as events in India and worldwide have brought new scrutiny to his life, work and politics.
Some of these reappraisals seem far-fetched, for example equating Gandhi with Osama bin Laden and global jihadists on the grounds that their politics were similarly based on a “sacrificial humanitarianism”. Speculations about his sexuality provoked a debate about his supposed “celibacy”. In the aftermath of the #MeToo movement, his strange practice of sleeping next to naked young women was openly discussed.
The rise of the much-persecuted Dalit people (previously known as untouchables) in political and intellectual spaces over the past two decades has given rise to trenchant criticisms of Gandhi’s complicity in the preservation of caste dominance, and of the hypocrisy of his repeated stands favouring the preservation of caste over justice and emancipation. Economist and politician Bhimrao Ramji “Babasaheb” Ambedkar’s evisceration of Gandhi’s politics is now more widely known and accepted than ever before.
Of those images of Gandhi named in the essay, some are now seen as enemies of the vision of progress of India’s current prime minister, Narendra Modi. Others have been refined to sit comfortably within the cultural nationalism of Hindutva, the project of creating a constitutional Hindu state and institutionalising its version of Hindu culture and social order in contradiction to Gandhi’s vision of a multi-faith nation.
Those using Gandhi’s methods of protest are now likely to be labelled “urban Naxal”, a Hindutva shorthand for intellectuals and activists involved in struggles of the rural poor, and have draconian legal charges slapped on them.
Gandhi’s international influence and reputation are now much diminished. His use of racist words for black Africans has fuelled righteous outrage against him. Malawi’s government stopped construction of a Gandhi statue after these accusations, though pressure from Modi’s government later saw the statue completed.
The most far-reaching bid to move India away from the nation Gandhi imagined has come from India’s ruling Bharatiya Janata party (BJP) and its parent organisation, the Rashtriya Swayamsevak Sangh (RSS). The RSS was briefly banned after Gandhi’s killing for its involvement in the crime. Violent communal polarisation, anti-minority politics and several episodes of mob lynching carried out with impunity have been the fertile ground for its rise.
However, Hindutva organisations organise tableaux annually to re-enact the assassination on January 30 1948. Those elements of the RSS who supported “Gandhian socialism” are in political hibernation.
Modi, the Hindutva state and the new official nationalism, though, still need Gandhi. Under Modi’s modernisation fetish, major Gandhian ashrams such as Sabarmati have been given such a tourist-friendly facelift that they seem stripped of all historical gravitas.
Modi supports the construction of Gandhi statues worldwide. At the UN, Modi said he represented the land of Gandhi, claiming that erecting a bust at the UN headquarters was a matter of pride for all Indians. Modi’s Gandhian paradox is that the only Gandhi he wants to assimilate into his project is a Gandhi shorn of his core beliefs, principles and modes of political action.
Is the influence of Gandhi’s ideals finished then? Not quite. Activists from the anti-Citizenship Amendment Act movement (an attempt by Modi to end Muslims’ constitutional equality with Hindus) claimed to follow Gandhian principles of popular protest. The farmers’ movement against Modi’s plans to give corporations power over Indian agriculture also tried to mobilise Gandhi’s legacy to its cause.
Perhaps, with the benefit of hindsight, there is a clearer picture now of the man, stripped of much of the myth and mystique: a resource for many social movements forging alternative ways to meet contemporary challenges.
Subir Sinha does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
When the US Department of Justice revealed on January 21 that its investigators had found classified materials in Joe Biden’s Delaware home, there was outrage – or, to be more accurate in most cases, faux outrage – in Republican party circles. They wasted no time in demanding further investigation into what appeared to be a mishandling of classified documents.
Republicans see a double opportunity in the US president’s sloppy handling of what is reported to be a small number of papers from his days as vice-president. It was a God-given chance to embarrass a sitting president gearing up to launch his re-election bid. But many in the GOP hoped it would also take the heat off an outwardly similar investigation into former president Donald Trump.
Trump allegedly took thousands of classified documents to his Florida home, Mar-a-Lago, when he left the White House in January 2021 – a matter that has been under FBI investigation since 2022.
Both the current president and his immediate predecessor have been found in possession of classified materials which should have been passed to the National Archives and Records Administration (Nara). This has been US law since the passage of the Presidential Records Act in 1978, which states that any records created or received by the president as part of his constitutional, statutory or ceremonial duties are the property of the US government, to be managed by Nara at the end of the administration.
As a result, US attorney general Merrick Garland has appointed a special counsel to investigate each president’s actions. Jack Smith has been appointed to Trump’s case. Smith is a career prosecutor whose CV boasts a range of achievements including convicting gang members of killing New York cops, prosecuting a sitting US senator, and bringing war crimes cases at The Hague.
Robert Hur, the US attorney in Maryland during the Trump administration and now a litigation partner at a top Washington law firm, has been appointed to investigate Biden’s case.
While Garland has no power to indict a sitting president, the US Congress could impeach Biden if his actions are found to be a “high crime and misdemeanour”. But in Trump’s case, if he is found to have broken the Presidential Records Act after leaving office, he could face a fine or even a three-year jail term.
As you’d expect, the US media has been quick to compare Biden’s actions with those of Trump. Yet as of now, the cases appear very different.
In Biden’s case, investigators have reportedly found a very small number of papers – seemingly from his final year as vice-president – at his home and at the Penn Biden Center, a thinktank that the president founded in Washington DC. It has yet to be revealed how many documents there are or their level of classification.
As soon as they were unearthed, the Biden team handed them over to Nara and has cooperated with the authorities ever since, proactively inviting a search of Biden properties. Interestingly, a cache of similar papers has reportedly been found at the Indiana home of Trump’s vice-president, Mike Pence.
The contrast with Trump is stark. He left the White House with thousands of pages of classified documents. Among the first batch, recovered by Nara a year after Trump left office, were documents described by national archivist Debra Steidel Wall as:
Classified national security information, up to the level of Top Secret and including Sensitive Compartmented Information and Special Access Program materials.
Rather than acquiesce to Nara’s demands under the law, Trump refused to return the documents; the FBI had to raid his home for the state to get them back, and he then fought in court for months to keep them.
It’s not clear why Trump took these documents. Speculation ranges from covering his back to seeking financial gain by using the materials in post-presidential dealings. It’s also possible he may have been trying to preserve his reputation prior to launching his third bid for the presidency.
So far then, two very different actions by the two most recent incumbents of the White House. However, for all Biden’s insistence in following the process, he made one crucial political misstep that could dog the remainder of his first term in office.
On November 2 2022, Biden’s personal lawyers found the first batch of classified documents from the Obama-Biden era locked in an office that Biden had used since leaving office.
They informed Nara the same day, and its officials took possession of the papers the following day. This was five days before the crucial US midterm elections – yet Biden did not go public about the find until January 9 2023, having been tipped off by CBS News that it was running the story.
This was manna from heaven for Republicans. The party had failed to achieve the massive gains it had expected in the midterms, and was disheartened by Trump’s lacklustre return to the campaign trail. Biden’s approval rating, meanwhile, had ticked up six points from a July 2022 low of 38%. So, with new House speaker Kevin McCarthy in place, it was a chance to raise a stink in Congress at the very least.
Why did Biden wait? Undoubtedly, he recalled the devastation to Hillary Clinton’s 2016 presidential bid when the then FBI director, James Comey, announced a week before the election that he had reopened the investigation into Clinton’s use of a private server to send classified emails while secretary of state, potentially breaking the Federal Records Act of 1950.
While Comey confirmed to Congress two days before the election that Clinton had no case to answer, the damage was done. Clearly Biden didn’t relish his own Clinton moment as a knife-edge midterm approached.
It is unlikely Biden will face charges over the papers found so far. But the discovery of any more caches of documents would be highly damaging for the president. And that’s the last thing the Democrats need if he plans to run in what is likely to be a close and rancorous 2024 election.
Mark Shanahan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The arrest of Matteo Messina Denaro, one of Sicily’s most infamous mafia bosses, has reminded many Italians of the extreme violence he was associated with when operating as a leading figure of Cosa Nostra.
Denaro appears to belong to another time – when the mafia brutally killed at will. And it is indeed true that the period of extreme violence with which he is associated has been confined to the past. But that does not in any way mean Italy’s organised crime groups have disappeared in the 30 years Denaro has been in hiding – they’ve just had a rethink about how they operate.
The Italian mafia has drastically reduced the number of homicides it carries out. Violence is now used in a much more strategic and less visible way. Rather than bloody and conspicuous murders, the modern mafia intimidates with crimes that are less likely to be reported to the police – such as arson and physical assault or sending threats. Murder is now a last resort.
The violent conflict between the Sicilian mafia and the Italian state reached its climax in the early 1990s. This was a period characterised by massacre after massacre, including the notorious bombing on Via D'Amelio in 1992 that killed magistrate Paolo Borsellino and five members of his entourage. In 1991 alone, there were 1,916 homicides – 718 of which were of a mafia nature.
The media covered every twist and turn. Politicians spoke in parliament about the scourge of organised crime. Mafia activity occupied a significant place in Italy’s public discourse and cultural imagination.
The authorities reacted with force. New laws were enacted, such as the “41-bis” prison regime, which included the threat of solitary confinement for members of organised crime gangs. A local municipality could be stripped of its powers for up to two years if local officials were thought to be working with the mafia, and a nationally appointed technocratic administration installed to clean house. A national anti-mafia directorate was also created so that more resources could be dedicated to the fight against organised crime.
In the years that followed, data shows a radical decrease in the number of mafia-related homicides, from 718 in 1991 to just 28 in 2019. In 2020, there were 271 homicides in Italy, compared with almost 2,000 in 1991. With 0.5 homicides per 100,000 inhabitants, Italy now has the lowest homicide rate in Europe after Iceland and Slovenia – fewer homicides per capita than Norway, Switzerland or Luxembourg.
At the same time, an interesting trend can be identified. In ongoing research, I’ve been analysing the archive of RAI (Italian National Television) over the past 40 years and studying the content of national and regional news bulletins. It’s clear that in years with more mafia homicides, media coverage of the mafia increases, as measured by the percentage of news devoted to the topic.
Conversely, when mafia homicides decrease, the topic is talked about less and there are fewer interventions in parliament. For example, between 1992 and 1994, organised crime was cited in 15% of speeches by parliamentarians. Within 20 years it was being mentioned in just 4.3% of speeches.
In other words, the more the mafia openly kills, the more attention it attracts from the media and politicians. It’s important to note that these are not necessarily years in which the mafia has been any less active in other ways. The smuggling, racketeering and corruption continues unabated. Only the most noticeable violence is in retreat.
All of this suggests that the decrease in the number of homicides could, at least in part, be a strategic choice. Criminals have worked out what they need to do to fly under the radar and be left to their own devices.
This does not mean that violence is no longer used – it is simply more targeted. As reported every year by the anti-mafia charity Avviso Pubblico, local administrators are now the main targets of the mafia. They are sent threatening letters and are treated with aggression in person at a rate of about one incident per day. This phenomenon goes almost unnoticed by the media, which would surely pay attention were a member of the national parliament to face intimidation or violence. At best, local officials might see their experiences reported in the local press; it’s rare for such incidents to be reported on at a national level.
The mafia thereby neatly achieves its goal of influencing local politics without attracting media and political attention. Election periods are particularly delicate: mayors are subject to the most threats at these times, particularly in the period immediately after taking office, as local criminals see an opportunity to take control of the newcomer.
This strategy has facilitated the mafia’s economic expansion. While the number of murders has declined, the number of properties and businesses seized from the mafia has ballooned – again suggesting that a drop in violent crime is not necessarily an indicator of a drop in other types of criminal activity. In 1991, the state seized two companies and four properties from the mafia. In 2019, 351 companies and 651 properties were seized.
These figures could be read as indicating that law enforcement is doing a better job of identifying economic crime, and that could indeed be the case. But other data lends weight to the more pessimistic interpretation of the facts.
In 2019, assets relating to organised criminals were seized in 11 Italian provinces (largely in the northern regions) that had never previously experienced mafia activity. And today, each police operation related to organised crime leads to seizures of about €1 million (£880,000). At the end of the 1990s, the average value was about €50,000.
This suggests that far from being in retreat, the mafia is expanding into new areas of the country, and finding more lucrative opportunities as it goes.
Gianmarco Daniele does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The UK government recently announced the results of the second round of successful bids for its £4.8 billion Levelling Up Fund. This money is provided to local governments with the ambitious (but pretty unspecific) aim of “creating opportunities for everyone” by addressing economic and social imbalances across the UK.
Winning projects have received as much as £50 million. In this round, the money will be used for ventures including building Eden Project North on Morecambe’s seafront and improving railway infrastructure across the UK. Smaller grants will go to projects involving electric buses, theatre and castle renovations, and new leisure centres and affordable housing.
All of the applicants – whether they won funding or not – have one thing in common: they all participated in a competitive bidding process. And while most bids for funding were not selected (out of 529 applications, only 111 will receive levelling up money in this round), they all represent hundreds of hours of work by in-house specialists in local government, and sometimes paid external consultants as well.
Which makes losing all the more disappointing. The almost 80% of local councils whose bids were rejected not only lost a project in which they believed, but also the time, money and energy spent preparing the bid.
Now there will be no multifunctional square in Wigan, and Bradford can forget about its advanced robotics centre. Well, for now anyway. Local councils will get the chance to invest their time and money all over again when they prepare bids for the next round of levelling up funding (at an as-yet unspecified date).
But research shows that there are ways to make the process more efficient and effective the next time around.
So-called “beauty contests” – as the process for winning such funding is often described – are ubiquitous in UK local government funding. Around a third of the more than 450 grant schemes identified by the Local Government Association involve competitive bidding.
The cost of preparing a typical application is estimated to be between £20,000 and £30,000. This is a lot of money at any time, but particularly as many local councils are experiencing unprecedented budget cuts.
According to the 52 pages of official guidance for the Levelling Up Fund, bidders had to explain how they would divide the requested amount into the three investment themes of the fund and their sub-categories. They had to provide explanations of why their project aligns with existing central government strategies and the various missions of the Levelling Up white paper. They also had to answer dozens of specific questions about the project, and complete a cost-benefit analysis over the lifetime of the investment.
But that’s not all. The bids then have to be read and evaluated by civil servants before going through several more rounds of ranking and tweaking by senior politicians (who may well have their own objectives).
Asking for detailed business cases helps rationalise decision-making during these kinds of processes. Beyond the basic financial evaluation, a cost-benefit analysis aims to measure the broader economic value of each project.
Winning project Eden North in Morecambe claims, for instance, that it will indirectly lead to more than 1,000 new jobs in a deprived region by attracting 740,000 visitors a year.
While useful, such assessments are often not very precise when comparing things as different as a railway upgrade in Cornwall and a city centre regeneration project in Yorkshire. Research also shows these tools often select the kinds of projects most likely to see cost overruns. And drawing conclusions about small differences between generally “good” projects in this way can be pretty meaningless.
Unfortunately, creating precise but meaningless rankings often happens when resources are scarce. Prospective students craft their best personal statements to get into their dream schools, and researchers submit lengthy proposals to access increasingly competitive grant money. But research shows these review processes are often no better than random, and unable to consistently rank good projects.
So why do we keep on ranking the unrankable? Streamlining bidding processes could save time and money by eliminating the bad projects, financing the outstanding ideas, and allocating the rest of the money randomly among the good ones.
However, experimental evidence shows this would be difficult in practice: bureaucrats and politicians like to be in control, even if the outcome is as good as random. Humans also like to interpret success as the result of hard work and not some sort of lottery.
In a recent large-scale experiment, I worked with Elias Bouacida, an assistant professor at Paris 8 University, on research which found that when given the choice, most individuals prefer to see their fate decided by a procedure that looks reasonable rather than by a lottery – even if they are aware that both are equally unpredictable.
A simple alternative – one that would be much more beneficial in terms of money and time saved on the bidding process – would be to replace competitive bidding with an allocation formula that assigns pots of money to local governments, letting them choose their own projects.
We could also offer fewer types of grant and allow applications to be re-used. Reducing application forms to a short cost-benefit analysis would help with this. And then applicants would simply need to trust in the imperfect outcome of a short but independent assessment by civil servants.
This would embrace the randomness of the outcomes, the current governmental preference for centralisation, and the human preference for the appearance of a reasonable process.
Renaud Foucart works for Lancaster University, a partner of Eden Project North in Morecambe.
When the Federal Reserve convenes at the end of January 2023 to set interest rates, it will be guided by one key bit of data: the U.S. inflation rate. The problem is, that stat ignores a sizable chunk of the country – rural America.
Currently sitting at 6.5%, the rate of inflation is still high, even though it has fallen back slightly from the end of 2022.
The overall inflation rate, along with core inflation – which strips out highly volatile food and energy costs – is seen as key to knowing whether the economy is heating up too fast, and guided the Fed as it imposed several large 0.75 percentage point interest rate increases in 2022. The hope is that raising the benchmark rate, which in turn increases the costs of taking out a bank loan or mortgage, for example, will help reduce inflation back to the Fed target of around 2%.
But the main indicator of inflation, the consumer price index, is compiled by looking at the changes in the prices that urban Americans, specifically, pay for a set basket of goods. Those living in rural America are not surveyed.
As economists who study rural America, we believe this poses a problem: People living outside America’s cities represent 14% of the U.S. population, or around 46 million people. They are likely to face different financial pressures and have different consumption habits than urbanites.
The fact that the Bureau of Labor Statistics surveys only urban populations for the consumer price index makes assessing rural inflation much more difficult – it may even be masking a rural-urban inflation gap.
To assess whether such a gap exists, one needs to turn to other pricing data and qualitative analyses to build a picture of price growth in nonurban areas. We did this by focusing on four critical goods and services for which rural and urban price effects may be significantly different. What we found suggests that rural areas may indeed be suffering more from inflation than urban areas, creating an underappreciated gap.
Higher costs related to cars and gas can contribute to an urban-rural inflation gap, severely eating into any discretionary income for families outside urban areas, a 2022 report found.
Car ownership is integral to rural life, essential for getting from place to place, whereas urban residents can more easily choose cheaper options like public transit, walking or bicycling. This has several implications for expenses in rural areas.
Rural residents spend more on car purchases out of necessity. They are also more likely to own a used car. During the first year of the COVID-19 pandemic, there was a huge increase in used car prices as a result of a lack of new vehicles due to supply chain constraints. These price increases likely affected remote areas disproportionately.
Rural Americans tend to drive farther as part of their day-to-day activities. Because of greater levels of isolation, rural workers are often required to make longer commutes and drive farther for child care, with the proportion of those traveling 50 miles (80 kilometers) or more for work having increased over the past few years. In upper Midwest states as of 2018, nearly 25% of workers in the most remote rural counties commuted 50 miles (80 kilometers) or more, compared with just over 10% of workers in urban counties.
Longer journeys mean cars and trucks will wear out more quickly. As a result, rural residents have to devote more money to repairing and replacing cars and trucks – so any jump in automotive inflation will hit them harder.
Though fuel costs can be volatile, periods of high energy prices – such as the one the U.S. experienced through much of 2022 – are likely to disproportionately affect rural residents given the necessity and greater distances of driving. Anecdotal evidence also suggests gas prices can be higher in rural communities than in urban areas.
As eating away from home becomes more expensive, many households may choose to eat in more often to cut costs. But rural residents already spend a larger amount on eating at home – likely due in part to the slimmer choices available for eating out.
This means they have less flexibility as food costs rise, particularly when it comes to essential grocery items for home preparation. And with the annual inflation of the price of groceries outpacing the cost of eating out – 11.8% versus 8.3% – dining at home becomes comparably more expensive.
Rural Americans also do more driving to get groceries – the median rural household travels 3.11 miles (5 kilometers) to go to the nearest grocery store, compared with 0.69 miles (1.1 kilometers) for city dwellers. This creates higher costs to feed a rural family and again more vehicle depreciation.
Rural grocery stores are also dwindling in number, with dollar stores taking their place. As a result, fresh food in particular can be scarce and expensive, which leads to a more limited and unhealthy diet. And with food-at-home prices rising faster than prices at restaurants, the tendency of rural residents to eat more at home will see their costs rising faster.
Demographically, rural counties trend older – partly the effect of younger residents migrating to cities and college towns for work or educational reasons. And older people spend more on health insurance and medical services. Medical services overall have been rising in cost too, so those older populations will be spending more for vital doctor’s visits.
Again with health, any increase in gas prices will disproportionately hit rural communities more because of the extra travel needed to get even primary care. On average, rural Americans travel 5 more miles (8 kilometers) to get to the nearest hospital than those living in cities. And specialists may be hundreds of miles away.
Rural Americans aren’t always the losers when it comes to the inflation gap. One major cost category works in their favor: housing.
Outside cities, housing costs are generally lower, because of more limited demand. More rural Americans own their homes than city dwellers. Since owning a home is generally cheaper than renting during a time of rising housing costs, this helps insulate homeowners from inflation, especially as housing prices soared in 2021.
But even renters in rural America spend proportionately less. With housing making up around a third of the consumer price index, these cost advantages work in favor of rural residents.
However, poorer-quality housing leaves rural homeowners and renters vulnerable to rising heating and cooling costs, as well as additional maintenance costs.
While there is no conclusive official quantitative data that shows an urban-rural inflation gap, a review of rural life and consumption habits suggests that rural Americans suffer more as the cost of living goes up.
Indeed, rural inflation may be more pernicious than urban inflation, with price increases likely lingering longer than in cities.
Stephan Weiler receives funding from the US Economic Development Administration. He is affiliated with the Regional Economic Development Institute (REDI@CSU).
Tessa Conroy receives funding from the United States Department of Commerce Economic Development Administration in support of Economic Development Authority University Center (Award No. ED21CHI3030029 and CARES Act award no. ED20CHI30700477). Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Department of Commerce Economic Development Administration.
In the end there was no red wave. And there was no blue wave.
There was an independent wave.
Pollsters and pundits were counting on independent voters in the 2022 midterm elections to swing to the Republicans as they did in 2014 when Barack Obama was president. That’s when independent turnout in the midterms added up to 29% of all voters, and the GOP won an additional 13 seats in Congress.
Expectations for the 2022 midterm elections also were based on a similar pattern in the 2018 midterms, when Donald Trump was president. Independents then represented 30% of the voters, and they broke for Democrats 54% to 42%.
Almost the mirror image. But mirrors don’t always reflect reality.
Nationally, these nonaligned voters were 31% of voters in the 2022 midterm. Despite the fact that the sitting president was a Democrat, they broke for Democrats by 2 percentage points, according to an Edison Research survey. They voted for Democrats by far bigger margins in key states with competitive Senate races – by 20 percentage points in Pennsylvania, 11 percentage points in Georgia and 16 percentage points in Arizona, where independents were fully 40% of those who voted.
Independent voters in the 2022 midterms made a decisive difference in close elections.
This came as a surprise to many pollsters and pundits who had predicted that independents would break for the GOP. They chalked up the pro-Democratic leanings of these unaligned voters to their distrust of Republicans eclipsing their anxiety about inflation and the economy.
Maybe so. But as someone who studies independent voters in the U.S., I believe pollsters got it wrong because so little is known about the voting patterns of independent voters.
The continuing flight of millions of voters from the Republican and Democratic parties is reshaping the nation’s political landscape in ways no one can control or even predict. It threatens the very basis on which campaigns and elections have been analyzed.
This is a challenge to how America has for generations thought about politics: that it’s a two-party game and people vote for the party they’re loyal to. With growing numbers of independent voters, that’s changing.
In our recently released book, “The Independent Voter,” my co-authors Jacqueline Salit and Omar Ali and I outline how political scientists and the media have been extremely skeptical and dismissive of independent voters. They often conclude that independents are uninformed, uninvolved “leaners” or “shadow partisans” who are likely voters for Democrats or Republicans but just don’t want to say so out loud.
We believe that conclusion is based on the two-party bias that is baked into the U.S. political system. That bias has misshaped the research and analytical tools used to understand this community of Americans.
Beginning in 1952, individuals who identified themselves to pollsters and researchers as independent voters were asked a follow-up question: Did they prefer one party over the other?
Since most independents indicated a lean toward one of the two major political parties’ candidates, political scientists have labeled them as “leaners,” independents who are likely to vote for one party or another. Political scientists also created a category called the “pure independent,” which was used to describe the fewer than 10% of people who truly refused to say whether they leaned one way or another.
Based on our research, we believe that this conclusion is a fundamental misunderstanding of independent voters and their voting patterns. This misunderstanding has led to mistaken assumptions about this growing population of U.S. citizens who have chosen to distance themselves from the two major parties.
Currently, 42% of Americans identify as independents. This is the highest percentage of independents in more than 75 years of public opinion polling. They rarely numbered more than 20% of voters from 1940 to 1960.
The choice to identify as an independent is a meaningful one, especially so in these politically hyperpolarized times, when many Americans do not feel or no longer feel at home in either party.
This is the reason Arizona Sen. Kyrsten Sinema gave for her December 2022 decision to change her party affiliation from Democrat to independent. Sinema said she believes that “[e]veryday Americans are increasingly left behind by national parties’ rigid partisanship, which has hardened in recent years. Pressures in both parties pull leaders to the edges, allowing the loudest, most extreme voices to determine their respective parties’ priorities.”
Surprisingly, little research has been done to investigate the meaning and culture of political independence, including very basic research into independent voting patterns over time.
In our recently published research in the journal Politics & Policy, my colleague Dan Hunting and I analyzed American National Election Studies data on political identification and voting choices from 1972 to 2020.
We observed significant volatility in party loyalty among independent voters across multiple elections. We found that independent voters were not reliably tied to one party or the other: from one election to another, they voted for Democrats, then Republicans and back again.
We also found evidence that a sizable number of independents move in and out of independent status from one election to another and in many cases actually register as members of one party or another, sometimes differently from one election to the next.
We suspect this is a function of the political candidates running at any given time. It also reflects the fact that many states don’t allow independents to vote in primaries, or otherwise restrict their participation in primaries by requiring them to choose a major party ballot in order to vote. Currently, independents are barred or restricted from primary voting in half the states. And a sizable number of independents are similarly locked out of presidential primaries and caucus voting.
Why does this matter?
We believe that classifying independent leaners as Republicans or Democrats mischaracterizes the partisanship of Americans and overestimates the rate of party voting. Most studies that find leaners are partisans simply do not account for a sizable number of independents who move in and out of independent status. Those studies also do not account for the voting patterns of independents over time.
In our research, we found that independents who vote as Democrats or Republicans in one election are often less likely to vote that way in the next election.
Which party’s candidates or initiatives they vote for often depends on specific candidates or issues on the ballot and on the political circumstances of any given election cycle.
Consequently, independents may have voted against the party in power in midterm elections for a decade. But when circumstances and options change, their voting patterns change, too.
This may well turn out to be a defining feature of being an independent: that individual candidates, issues and the broader social environment – not party loyalty – drive their choices.
Unpredictability characterizes independent voters in modern times. This is what gives them their power – and it is why a deeper understanding of this group is urgently needed.
Thom Reilly does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Scientists used to think power in animals played out in a tidy and simple way. Nature is a dog-eat-dog place. Rams butt heads in a thunderous spectacle, and the winning male gets to mate with a female. Bigger, stronger, meaner animals beat up smaller, weaker, more timid ones, and then walk, fly or swim away with the prize.
All that’s certainly going on in the wild. But the natural world, it turns out, is so much more interesting than simply squaring off in brutish battles. As in tales of palace intrigue, the quest for power among animals is subtle, nuanced, strategic and, dare I say, beautiful.
I’m an animal behaviorist and evolutionary biologist who has been studying complex social behavior in nonhumans for 30 years. As I describe in my book, “Power in the Wild: The Subtle and Not-So-Subtle Ways Animals Strive for Control over Others,” I have come to learn that many power struggles in animals look more like scenes from a Shakespearean drama than rounds in a boxing match.
To study the dynamics of power in nonhumans we need a definition. How do we gauge power in other species? I think of power as the ability to direct, control or influence the behavior of others in order to control access to resources. Using that definition, power pervades every aspect of the social lives of animals: what they eat, where they eat, where they live, who they mate with, how many offspring they produce, who they join forces with, who they work to depose and more.
For years, my former Ph.D. student Ryan Earley and I were obsessed with power and spying in groups of a tiny fish called the swordtail. So much so that Ryan ended up building his Ph.D. dissertation around these fish whose brains can sit comfortably on the head of a pin.
When two males in a group of swordtails meet, they often engage in a series of chases, followed by displays in which they twist their bodies into an S shape. If it’s not clear at that point who is top swordtail, the fish ram into each other. And if even that doesn’t settle matters, they circle each other, lock jaws and mouth-wrestle, thrashing about until a clear victor emerges.
Earley watched these pairwise power struggles for hundreds of hours and began to suspect he wasn’t the only one watching – other male swordtails seemed to be as well. To test that hunch, Earley took a page from the script of a spy thriller, where an unsuspecting target is watched from behind a one-way mirror.
He designed an experiment in which a pair of swordtails that were involved in aggressive interactions were on one side of an experimental tank and a spy fish swam freely on the other side. The spy and the combatants were separated by tinted glass that allowed the spy to see in but kept the pair of battling fish in the dark about being watched.
When spies were later paired up with the winner of the fight they’d watched, they stayed as far away as they could, which is just what a good spy should do when confronted with a potentially dangerous foe.
But what was even more interesting was how these 2-inch-long espionage agents processed what they had learned about the loser of the fight they’d watched. If a loser gave up quickly, spies later went after him. Alternatively, if the loser put up a good fight before capitulating, spies were much more cautious, dealing with that individual using the fish equivalent of kid gloves.
So, while there is a fierce physical component to power in swordtails, it’s subtle spying that adds nuance to the power dynamics in the group.
In their quest for power, animals don’t just spy on their rivals. They also change how they behave depending on who is watching.
Animal behaviorist Thomas Bugnyar has been studying this “audience effect” in one of the wiliest of birds, the raven. At a field station in the Austrian Alps, Bugnyar and his colleagues have been filming raven power struggles. These can be rather tame affairs, with one bird approaching and the other retreating. But on occasion they escalate into down-and-dirty fights, during which ravens resort to weaponry: their sharp beaks and claws.
From a raven’s perspective, Bugnyar and his team are spectators not worth paying any mind to. But audiences made up of other ravens are a different matter. If avian audience members are paying attention, they can potentially be manipulated to serve one’s interests.
Ravens on the losing end of a power struggle take advantage of that, modulating their defensive calls depending on exactly who is watching and listening. When the audience is made up of potential allies, including relatives and friends – meaning other birds the victim has strong ties to – ravens increase the rate at which they screech for help. Ravens nearby sometimes come to the aid of a victim who utters these calls.
Victims are not only paying attention to those who might help them, though, but also to audience members who might make their situation even worse by coming to the aid of the brute currently overpowering them. In order to draw as little attention to their unfortunate predicament as possible, victims reduce their call rates when an audience is composed primarily of birds who are likely to help their opponent.
The subtle undertone of this audience effect emphasizes the complex dynamics of power in nonhumans. There’s more to it than might makes right.
Ravens, swordtails and countless other species all over the planet demonstrate that human beings are not alone when it comes to employing every trick in the book to attain and maintain power. If you pay close attention and know what to look for, you can see and hear an animal kingdom replete with Machiavellian scenes of spies and actors, threats and bluffs – just as you watch our own species, on the news and in the office, connive, bluster and feint, all for the sake of power.
Lee Alan Dugatkin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Los Angeles had oil wells pumping in its neighborhoods when Hollywood was in its infancy, and thousands of active wells still dot the city.
These wells can emit toxic chemicals such as benzene and other irritants into the air, often just feet from homes, schools and parks. But now, after nearly a decade of community organizing and studies demonstrating the adverse health impacts on people living nearby, Los Angeles’ long history with urban drilling is nearing an end.
In a unanimous vote on Jan. 24, 2023, the Los Angeles County Board of Supervisors voted to ban new oil and gas extraction and phase out existing operations. It followed a similar vote by the Los Angeles City Council a month earlier. The city set a 20-year phaseout period, while the county has yet to set a timetable.
As environmental health researchers, we study the impacts of oil drilling on surrounding communities. Our research shows that people living near these urban oil operations suffer higher rates of asthma than average, as well as wheezing, eye irritation and sore throats. In some cases, the impact on residents’ lungs is worse than living beside a highway or being exposed to secondhand smoke every day.
Over a century ago, the first industry to boom in Los Angeles was oil.
Oil was abundant and flowed close to the surface. In early 20th-century California, sparse laws governed mineral extraction, and rights to oil accrued to those who could pull it out of the ground first. This ushered in a period of rampant drilling, with wells and associated machinery crisscrossing the landscape. By the mid-1920s, Los Angeles was one of the largest oil-exporting regions in the world.
Oil rigs were so pervasive across the region that the Los Angeles Times described them in 1930 as “trees in a forest.” Working-class communities were initially supportive of the industry because it promised jobs but later pushed back as their neighborhoods witnessed explosions and oil spills, along with longer-term damage to land, water and human health.
Tensions over land use, extraction rights and subsequent drops in oil prices due to overproduction eventually resulted in curbs on drilling and a long-standing practice of oil companies’ voluntary “self-regulation,” such as noise-reduction technologies. The industry began touting these voluntary approaches to deflect governmental regulation.
Increasingly, oil companies disguised their activities with approaches such as operating inside buildings, building tall walls and designing islands off Long Beach and other sites to blend in with the landscape. Oil drilling was hidden in plain sight.
Today there are over 20,000 active, idle or abandoned wells spread across a county of 10 million people. About one-third of residents live less than a mile from an active well site, some right next door.
Since the 2000s, the advance of extractive technologies to access harder-to-reach deposits has led to a resurgence of oil extraction activities. As extraction in some neighborhoods has ramped up, people living in South Los Angeles and other neighborhoods in oil fields have noticed frequent odors, nosebleeds and headaches.
The city of Los Angeles has no buffers or setbacks between oil extraction and homes, and approximately 75% of active oil or gas wells are located within 500 meters (1,640 feet) of “sensitive land uses,” such as homes, schools, child care facilities, parks or senior residential facilities.
Despite over a century of oil drilling in Los Angeles, until recently there was limited research into the health impacts. Working with community health workers and community-based organizations helped us gauge the impact oil wells are having on residents, particularly in historically Black and Hispanic neighborhoods.
The first step was a door-to-door survey of 813 neighbors from 203 households near wells in Las Cienegas oil field, just south and west of downtown. We found that asthma was significantly more common among people living near South Los Angeles oil wells than among residents of Los Angeles County as a whole. Nearly half the people we spoke with, 45%, didn’t know oil wells were operating nearby, and 63% didn’t know how to contact local regulatory authorities to report odors or environmental hazards.
Next, we measured lung function of 747 long-term residents, ages 10 to 85, living near two drilling sites. Poor lung capacity, measured as the amount of air a person can exhale after taking a deep breath, and poor lung strength, how forcefully the person can exhale, are both predictors of health problems including respiratory disease, death from cardiovascular problems and early death in general.
We found that the closer someone lived to an active or recently idle well site, the poorer that person’s lung function, even after adjusting for such other risk factors as smoking, asthma and living near a freeway. This research demonstrates a significant relationship between living near oil wells and worsened lung health.
People living up to 1,000 meters (0.6 miles) downwind of a well site showed lower lung function on average than those living farther away and upwind. The effect on their lungs’ capacity and strength was similar to impacts of living near a freeway or, for women, being exposed to secondhand smoke.
We found evidence that oil-related contaminants, including toxic metals such as nickel and manganese, are getting into the bodies of nearby residents – an indication that contamination is reaching the surrounding community.
Using a community monitoring network in South Los Angeles, we were able to distinguish oil-related pollution in neighborhoods near wells. We found short-term spikes of air pollutants and methane, a potent greenhouse gas, at monitors less than 500 meters, about one-third of a mile, from oil sites.
When oil production at a site stopped, we observed significant reductions in such toxins as benzene, toluene and n-hexane in the air in adjacent neighborhoods. These chemicals are known irritants, carcinogens and reproductive toxins. They are also associated with dizziness, headaches, fatigue, tremors and respiratory system irritation, including difficulty breathing and, at higher levels, impaired lung function.
Many of the dozens of active oil wells in South Los Angeles are in historically Black and Hispanic communities that have been marginalized for decades. These neighborhoods are already considered among the most highly polluted, with the most vulnerable residents in the state. Residents contend with multiple environmental and social stressors.
The city’s timeline for phasing out existing wells is set for 20 years, leaving concerns about continuing health effects during this period. We believe these neighborhoods need sustained attention to reduce the existing health effects, and the city needs a plan for a just transition and cleanup of the oil fields as the areas transition to new uses.
This updates an article originally published Feb. 3, 2022.
Jill Johnston receives funding from the National Institute of Environmental Health Sciences.
Bhavna Shamasunder receives funding from the National Institute of Environmental Health Sciences and the 11th Hour Project.
Warning: this article contains spoilers.
From the widely panned Super Mario Bros. movie (1993) to Netflix’s Resident Evil (2022), which released to decidedly mixed reviews, game adaptations have historically been cursed on both big and small screens.
HBO’s series based on the hugely successful PlayStation game The Last of Us is the latest entry in this genre. Early indications from critics and viewers suggest it has broken the dreaded video game curse.
The series occupies a unique position. In 2013, when the game was released, post-apocalypses were incredibly popular science fiction worlds. In 2023, such pandemics, as we’ve discovered, hew closer to science fact.
The scene in which protagonists Joel and Ellie encounter a mass grave has a distinctly different impact when humanity has so recently had to grapple with such tragedies in the real world.
In the series, a child’s blanket links this scene to a flashback of mass evacuation in the wake of the Cordyceps (the fungus that evolves to infect humans) outbreak, foreshadowing the series’ continuing exploration of the values of family, connection and community.
The Last of Us game was released in 2013 amid what critics have called the “dadification” of games – a period in which many releases focused on paternal protagonists.
This “dadification” was driven partly by maturing technology that allowed more complex stories to be told. Also, developers who had grown up playing games were maturing and starting families, including The Last of Us creative director Neil Druckmann.
The kinds of stories they wanted to tell matured too, resulting in games addressing parent-child relationships, including The Walking Dead (2012) and God of War (2018).
The theme of parenthood is prevalent in The Last of Us too. While Joel and Ellie’s relationship makes this clear, this theme extends to other characters including Joel’s brother Tommy, an expectant father. HBO’s adaptation takes this a step further by also briefly exploring Ellie’s connection to her mother.
The value of parenthood in the game unfurls into the show’s focus on family. Dialogue throughout the series reflects its importance: Joel reminding Tommy of their familial bond, a scientist who just wants to be with their family, the dying teenage bandit pleading to be returned to his mother.
The value of family extends to supporting characters who are exclusive to, or expanded upon in, the series. In the series, brothers Henry and Sam share a sibling bond, where the game instead portrayed a surrogate parent-child relationship that complements Joel and Ellie’s.
The series further extends the game’s exploration of family by having Henry and Sam’s story intersect with new character Kathleen. The leader of the Kansas City Quarantine Zone resistance movement, Kathleen has her own motivations surrounding her brother.
While family is a core concern of the show, the theme of connection is also explored. This can be seen in its many “found” families. Joel and smuggling partner Tess’ relationship gets more screen time than in the game, as does the short-lived Joel-Tess-and-Ellie family dynamic.
This extends to the series’ other couplings, from episode-length explorations of Joel’s friends and existing game characters Bill and Frank, to Ellie’s relationship with school friend Riley, to Firefly leader Marlene’s connection to Anna – a best friend with a pivotal story role.
Even the Cordyceps is not immune to the rhetoric of connection. The spores by which the fungus spreads in the game have been changed to fungal tendrils in the show. These tendrils connect all the Infected – the series’ version of zombies.
Step on a tendril in one place and you’ll wake a dozen Infected in another. The fungal spores in the game are an impersonal, environmental hazard. The series’ tendrils instead actively seek out new victims and in one unsettling scene, defile a fundamental act of human connection and love to achieve this.
What it means to be human in a world ravaged by a pandemic is also explored. The politics of peaceful communities is examined, from the militaristic Quarantine Zone where Joel first meets Ellie, to Tommy’s settlement – jokingly but truthfully derided as “communism” by Joel.
More important, perhaps, is the exploration of hostile communities that game players would typically shoot their way through. Kathleen’s control of the Kansas City resistance group is given a two-episode arc that ends with Joel and Ellie burying Henry and Sam – a humanising end to their story the game did not afford.
The notion of burial as a human ritual is unearthed again a few episodes later when a girl asks in-game antagonist David, the leader of a group at Silver Lakes Resort, if her father can be buried – a request he denies.
The episode explores David and his group, humanising them more than in the game. This further humanisation then stands in stark contrast to a reveal that poses the ultimate question of where the tipping point is between human and monster.
These values are framed in relation to the show’s ultimate theme: love. Joel loved Sarah. Bill loved Frank. Kathleen loved her brother. David’s community loved him. This love, derived from the personal relationships found and strengthened amid chaos, breeds hope not only for the world portrayed in the show but also for our own.
A repeated motif in the series is the motto of the resistance group, the Fireflies: “When you’re lost in the darkness, look for the light.” In a world all too familiar with pandemics in 2023, this masterful adaptation of The Last of Us is something bright indeed.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
The presence of oil in Somaliland has been confirmed by a recent exploration. The discovery has raised the stakes in Somaliland’s claim for independence from Somalia as it holds the potential for a new stream of revenue for the semi-autonomous state. But the oil exploration is deepening the rift with Somalia, which claims sovereignty over the region. Michael Walls answers five key questions.
In 2020, the Norwegian seismic survey company TGS estimated that the Somali basin as a whole likely holds offshore reserves of about 30 billion barrels, with additional onshore reserves, although land estimates are considerably less consistent. Assessments generally include Somaliland and would place Somalia’s reserves at about the same level as Kazakhstan’s, giving the area the 18th or 19th largest reserve globally, as assessed in 2016.
Geological conditions seem to support the view that there are likely to be commercially viable deposits in the region. Whether they prove close to estimates remains unknown at this stage.
There is also evidence of offshore (undersea) reserves in the region, as well as onshore (beneath the land) in the Somali region of neighbouring Ethiopia. Bordering Somalia, and located next to Oromia Regional State, the Somali Regional State (also Ogaden) is Ethiopia’s second largest federal region.
This find is being billed as the first discovery in Somaliland but in fact there have been several instances of oil seepage. An oil seep occurs when geological or unrelated human activity results in oil “seeping” into the ocean or onto land. In such cases, the physical appearance of oil occurs unexpectedly rather than as a result of deliberate exploration. It is often taken as evidence of a substantial reserve close to the surface, but it doesn’t always indicate commercially viable quantities or accessibility.
Genel Energy, the UK oil exploration firm on whose concession this discovery occurred, has held rights to explore in Somaliland since 2012. So the find isn’t quite the sudden and unexpected bonus that’s been implied by some reports.
Progress has been slow because Somaliland’s lack of international sovereign recognition creates an uncertain context for significant investment. Somalia still claims sovereignty over Somaliland even though the region has operated as a fully if informally independent state since 1991.
This creates a vacuum. The Somali federal authorities cannot enter into meaningful agreements over exploration or extraction in Somaliland. Somaliland is limited by investment risk. And Somalia’s threats and complaints emphasise that risk.
This has not stopped Somaliland from entering into agreements, but it has slowed activities taking place under them.
In addition, there have been disputes within Somaliland over how the proceeds of hydrocarbon exploitation would be shared.
One of the areas with significant potential is the Nugaal Valley, which stretches across the border of eastern Somaliland into Puntland. Genel Energy was already exploring in that zone a decade ago. It withdrew for a time in 2013, citing security concerns. In the same time period, Africa Oil secured rights from the Puntland administration that overlapped with those issued by Somaliland to explore in the Nugaal Valley. A 2014 UN report expressed concern that hydrocarbon exploration in the Nugaal Valley risked fuelling violent conflict. Africa Oil ceased active operation in the area a year later.
The most recent find is in a different area of Somaliland: Salaxley in the Maroodi Jeex region, which is less politically volatile. This makes it more likely that Genel Energy will be able to advance its work.
The uncertainty created by a lack of international recognition makes it difficult to mobilise sufficient investment. And there is little doubt that Somalia will continue to remain hostile to both exploration and extraction.
Similarly, local sensitivities around the sharing of financial rewards will need to be managed with care and deep local engagement.
Some commentaries have suggested that the newly discovered oil could be abundant. But the reserves could also prove limited and may present technical challenges in extraction. It is therefore possible that extractive plans will operate at the margin of financial feasibility.
The latest find was the result of an accidental release of oil during drilling for water rather than from deliberate exploration. This may be evidence of a significant and easily accessed reserve, but seepages and strikes like this have happened in the past in Somaliland. More extensive geo-seismic surveying will be needed before the full extent of the reserve is confirmed.
I have previously studied the place of oil in Somalia and its breakaway states. Somali society is kinship-based. Specific groups identify with particular geographic areas. This means that the political implications vary sharply depending on the location of any oil discovery.
Previous experience of exploration in the Nugaal Valley showed how socially and politically volatile the exercise could be.
The area of the latest find, around Salaxley, is likely to prove less volatile. Unlike the Nugaal Valley, Salaxley has not customarily been subject to the same inter-clan and political disputes. But there will still need to be significant negotiation over sharing of the proceeds of exploration. The government will be keen to ensure that the windfall advantages those in power. Local clan groups will be keen to ensure there is a clear benefit accruing to their communities. Other clans will equally want a say in how increased wealth benefits Somaliland as a whole.
Depending on how negotiations conclude, there is potential for this clan-based process to mitigate the “resource curse” effect. In other words, the system of inter-group negotiation that underpins Somali society might provide some protection from the narrow economic impact of oil wealth that has been felt elsewhere. However, that is by no means certain and the process of negotiation itself has the potential to fuel violence, just as the UN worried in 2014.
Either way, the Somaliland economy remains tiny. Any influx of significant new wealth, even on a fairly modest scale, will create new social, economic and therefore political tensions.
The regional impact will depend on the extent of the discovery. Somalia has consistently objected to hydrocarbon exploration in Somaliland as all concessions have been granted under Somaliland legislation. It would object even more strongly to commercial extraction.
Ethiopia’s interest is likely to be more equivocal. Salaxley is close to the Ethiopian border, and not far from active hydrocarbon exploration concessions in Ethiopia’s Somali region. If the Somaliland reserves prove to be extensive after a technical appraisal, it would suggest that those in the adjacent Ogaden Basin are also significant. In this case Somaliland and Ethiopia would hold a mutual interest in ensuring sufficient regional security to enable extraction.
Michael Walls has in the past received funding from the UK Economic and Social Research Council (ESRC), the Foreign Commonwealth and Development Office (FCDO) and other research funders to conduct research and consultancy. All such funding has been to undertake specified and time-limited research or consultancy work through UCL.
ITVX’s Deep Fake Neighbour Wars is the breakthrough in television’s use of artificial intelligence that experts in the cultural use of deepfakes like myself have been waiting for.
In this six-part series, celebrities have apparently invaded our everyday lives. Presented as a reality TV show, we meet suburban neighbours in Catford, south London. Idris Elba (handyman/delivery driver) takes pride in the garden behind his ground-floor flat, until new upstairs tenant Kim Kardashian (bus driver) starts to exercise her right to use the shared space. They recount the story of a dispute that ultimately turns to violence.
In a second storyline set in Southend, Greta Thunberg (single mum) has adopted the sunny coastal Essex town to escape the cold of northern Sweden, until she confronts neighbours Conor McGregor (florist) and Ariana Grande (scaffolder) – Christmas decoration fanatics with a permanent display of noisy, flashing reindeer in front of their bungalow. Cue high drama when Thunberg takes justice into her own hands.
It’s a brilliant play on the mockumentary, a genre that brought us film comedy classics such as This Is Spinal Tap (1984) and Borat (2006, 2020), and TV hits like Parks and Recreation (2009-2015).
Deepfakes sound suspicious just from their name, which tells us immediately that we’re being deceived. Many producers now prefer the term “synthetic media” to avoid this connection.
The major ethical issue with deepfakes is the idea that they’re trying to trick us – but this isn’t a problem when they’re used for obvious comedy.
It may look like Idris Elba is living an alternative reality in Catford, but we laugh because we know clearly that this isn’t the real thing. Deepfakes can twist and rejuvenate pop culture through their playfulness, while also challenging us to consider what we accept as real.
As philosopher Adrienne De Ruiter explains, “deepfake technology and deepfakes are morally suspect, but not inherently morally wrong.”
This year marks a watershed moment for deepfakes. The technology is at a cultural crossroads in which the primary use in its early years – non-consensual pornography – is being overshadowed by the technology’s adoption by mainstream popular culture.
Neighbour Wars follows on from other attempts to use deepfakes in television. In 2020, Channel 4’s Alternative Christmas Message featured Queen Elizabeth II speaking to the nation during the pandemic.
At that time, broadcast deepfakes were made by colossal visual effects (VFX) companies – Channel 4’s Christmas message was made by the UK’s Framestore, which also created the VFX for big movies including Doctor Strange and Fantastic Beasts.
Trey Parker and Matt Stone (makers of South Park) also tried out deepfakes in 2020. They posted a 15-minute spoof consumer rights programme, Sassy Justice, on YouTube. Its fictitious host, consumer advocate Fred Sassy, was played by a deepfake Donald Trump.
Sassy Justice was a true forerunner of Deep Fake Neighbour Wars, as it created multiple deepfake celebrities also including Julie Andrews and Mark Zuckerberg. The video’s hokey visual style spoofed low-budget daytime TV and joked with our gullibility, ensuring its audience was always aware of its AI origins.
Smaller online content creators have been the main innovators of deepfakes in pop culture. Corridor Digital was set up by two geeky guys from Minnesota – Sam Gorski and Nico Pueringer – who moved to Los Angeles to produce viral videos.
When deepfakes emerged, they jumped on the technology and produced a breakout video, Keanu Reeves Stops A Robbery, in 2019. As deepfakes expert Lisa Bode writes, Corridor Digital’s work demonstrated how the technology was “widely available and now affordable, or even free, in the case of open-access deepfake generation face replacement apps like Faceswap”.
The team’s YouTube channel now features dozens of short deepfake videos, many of them boasting how they can do visual effects better than Hollywood studios.
Chris Ume is a Belgian deepfake creator who stunned the online world in 2021 when he produced short videos of Tom Cruise at such a high level of resolution and believability that only the script reassured us they were fake.
Ume has taken his deepfake expertise into the world of mainstream TV. In 2022, he entered America’s Got Talent with collaborator Tom Graham. The pair brought singer Daniel Emmet on to the stage and rolled a TV camera directly in front of him. When Emmet began to sing, his image on the massive screen above was deepfaked live into that of Simon Cowell performing You’re The Inspiration.
The delighted audience and judges progressed the act to the show’s final. As with comedy, this use of deepfakes avoided any sense of deception (the real Simon Cowell was sitting with the jury, aghast).
The music industry is set to become a rich area for deepfakes, and artists have already begun experimenting with the technology.
Last year, Kendrick Lamar released The Heart Part 5, with a video using deepfakes to transform him into OJ Simpson, Nipsey Hussle and Kobe Bryant. Lamar’s groundbreaking work was quickly followed by fellow rapper Kanye West adopting deepfakes in his video for Life of the Party.
Like a magician’s act, deepfakes create wonder (and fear) – and like all new technologies, this AI generates a buzz. Deep Fake Neighbour Wars shows that deepfakes don’t need to remain as short online clips; they can now be used to make longform TV content. Expect ITV’s venture to be the tip of the iceberg.
Dominic Lees receives funding for his deepfakes research from the University of Reading’s Impact Acceleration Account, funded by the Arts and Humanities Research Council (AHRC), part of UK Research and Innovation.
Neglected tropical diseases are a group of communicable diseases found in tropical and subtropical regions of the world. They are classified as “neglected” because they have received little or no attention in terms of prevention and control for several decades. The World Health Organization guides the way they are identified and managed.
These 20 conditions mostly affect impoverished communities, women and children. Most people affected by them live in rural areas where houses are overcrowded, and basic infrastructure such as water and toilet facilities are lacking. More than one billion people are estimated to be affected globally.
The neglected tropical diseases include onchocerciasis, schistosomiasis, lymphatic filariasis, soil-transmitted helminth infections and trachoma. Also among them are dengue fever, leptospirosis, trypanosomiasis, leishmaniasis, Buruli ulcer, leprosy and snake-bite envenoming.
More than 170,000 people die of these diseases annually – fewer than die of malaria, which caused 627,000 deaths in 2020. But the diseases can cause disfigurement, stigmatisation, malnutrition and cognition problems, leading to a range of social, economic and psychological burdens for those affected.
Nigeria carries a particularly heavy burden. A quarter of the people affected by neglected tropical diseases in Africa live in Nigeria. An estimated 100 million people in the country are at risk for at least one of the diseases and there are several million cases of people being infected with more than one of them.
As an epidemiologist who has studied some of these diseases for 21 years and provided technical support for control activities, I can say that Nigeria has made progress in controlling them. The country has eliminated Guinea-worm disease and two states have eliminated onchocerciasis. But it can still do more.
Other diseases are still endemic in Nigeria. There is a National Neglected Tropical Diseases steering committee overseeing control efforts. There are also control units at the federal, state and local government levels. Local and international donors are helping as partners. Progress has been made in mapping of the diseases, development of master plans and the delivery of intervention.
The WHO puts efforts to control the diseases into two categories: prevention and management.
Preventive control is about administration of efficacious, safe, and inexpensive medicines. The diseases that can be prevented this way include onchocerciasis, schistosomiasis, lymphatic filariasis, soil-transmitted helminths and trachoma. They are the most common in sub-Saharan Africa.
Diseases that lack appropriate tools for large scale use are managed case by case.
In 2012, pharmaceutical companies, donors, endemic countries and NGOs signed the London Declaration on Neglected Tropical Diseases. They committed to control, eliminate or eradicate 10 priority diseases by 2020.
In 2020, World Neglected Tropical Diseases Day was declared, to be marked on 30 January every year.
The various global initiatives have built capacity for African scientists through research grants, and created awareness and funding partnerships to meet the WHO 2030 elimination goals in Africa.
Nigeria began concerted efforts to combat human and animal trypanosomiasis (sleeping sickness and nagana) in 1947 with the establishment of the Nigerian Institute for Trypanosomiasis Research, Kaduna. Large scale human onchocerciasis (river blindness) control efforts started in 1988. When drug efficacy evidence became available, the National Lymphatic Filariasis Elimination Programme was established in 1997.
Support for the procurement, delivery and distribution of medicines increased in the 1990s through donor programmes. Control units were established at the Federal Ministry of Health, and all 36 states were given the responsibility to implement control activities using recommended medicines.
To reach the marginalised populations who bear the greatest burden of these diseases, volunteers go door to door to administer medicines to people in their community. Teachers play a similar role where drug distribution is school-based.
These interventions are supported through the national budget, bilateral aid and direct support from development partners. Medicines are donated by pharmaceutical companies, and deliveries are coordinated by the WHO.
The treatment data for human onchocerciasis and lymphatic filariasis (elephantiasis) from 2014 to 2021 showed progress in the number of people treated, achieving the WHO treatment coverage target of 65%. However, for schistosomiasis (bilharzia) and soil transmitted helminthiasis (intestinal worms), Nigeria has not been able to meet the recommended coverage of 75% set by the WHO.
This shows that the control and elimination of these diseases are in progress.
The lowest coverage was recorded during the COVID pandemic, in 2020 and 2021.
Two states (Plateau and Nasarawa) have interrupted the transmission of onchocerciasis. A number of local governments are near elimination stage – 61 in 2021. This shows that the disease is under control.
Lymphatic filariasis is also on a downward trend, but only 37 local government areas are nearing elimination. The disease is found in 520 local governments out of 774 in Nigeria.
For schistosomiasis, treatment coverage has been below the WHO target. This is largely due to inadequate drug supply and the challenges of treating children in and outside the school system. The WHO introduced new guidelines on control and elimination in 2022. The road map targets the elimination of schistosomiasis as a public health problem, globally. The new guidelines also recommended the implementation of other interventions such as provision of water, sanitation and hygiene education (WASH), behavioural health education and snail control to break the transmission of schistosomiasis in affected communities.
For soil transmitted helminthiasis, 117 local government areas have achieved more than 75% treatment coverage out of the 147 targeted for treatment.
Nigeria has taken massive strides towards reducing trachoma prevalence.
Preventive control of neglected tropical diseases relies on mass administration of drugs. This requires substantial financial and human resources. More importantly, effective communal participation is vital. But there is low public awareness about these diseases and the efforts being made to control them.
The shortage of medicines, poor financial support and weak logistics for treatment campaigns are not helping control and elimination efforts. Additional challenges are poor political will, the lack of NGO partners in some states, and apathy among drug distributors and health workers due to lack of incentives. These challenges got worse during the pandemic.
Government and stakeholders at all levels should commit to control activities through increased funding. There should also be sensitisation of citizens through advocacy to support control activities in their communities. It is important that Nigeria should enact legislation to drive and scale up control activities. Otherwise the country would be left behind when these diseases have been controlled or eliminated in the rest of sub-Saharan Africa by 2030.
Uwem Friday Ekpo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Each year, more than 2 million people in the UK have troublesome earwax that needs to be removed. However, more people are finding that this service is no longer being provided at their GP surgery. In fact, 66% of people seeking these services have been told that earwax removal is no longer available on the NHS.
Questions have been raised in parliament about why people are being referred to earwax clinics in hospitals. This results in long waiting times and is not the best use of specialist services.
Many people are resorting to using private services on the high street that cost around £50 to £100. But the Royal National Institute for Deaf People (RNID), a charity, reports that more than a quarter of people they surveyed cannot afford to pay to have their earwax removed privately. This especially applies to people requiring recurrent earwax removal, such as those who wear hearing aids and earbud earphones – which tend to cause impacted earwax.
Our bodies produce earwax to clean, protect and keep our ears healthy. Movement of the jaw, as well as the skin that lines the ear canal, causes the wax to move to the entrance of the ear where it then flakes off or is carried away when we wash. Sometimes this doesn’t work and the earwax becomes impacted. Impacted earwax that blocks the ear canal is a major reason for GP consultations.
The National Institute for Health and Care Excellence (Nice) is clear that NHS earwax removal services should be provided in the community where demand is greatest. Why is this recommendation for community earwax removal services falling on deaf ears?
A recommendation from Nice is not a mandate, and GPs are under no obligation to offer an earwax removal service. There are several reasons this service is often no longer offered in primary care, some of which are based on misunderstandings.
First, manual water-filled syringes for flushing out earwax can cause high pressure of water and might damage the patient’s ears – not something a GP wants to be responsible for doing. (Alternative cheap, low-pressure water irrigation devices are now widely available.)
Second, there is a mistaken belief among some GPs that earwax can be self-managed using wax-softening ear drops on their own. However, there is no good quality evidence that softened earwax dissolves and magically disappears into the ether.
The most common symptom caused by impacted earwax is hearing difficulty. This is often accompanied by discomfort and noises in the ears. Healthwatch Oxfordshire, a charity, revealed that adults with earwax required between one and four NHS visits before attending a dewaxing clinic and that the time from first experiencing symptoms to final resolution was three to 30 weeks.
Try simulating the effect of impacted wax by walking around with your fingers firmly plugging both of your ears for a few days. You’ll soon realise that what at first sounds trivial is no laughing matter.
Hearing difficulty means you can’t communicate with ease or listen to the TV. It also reduces your ability to detect and monitor sounds in the environment, such as an approaching car. Hearing difficulty can lead to social isolation and depression. More than nine out of ten people report that impacted earwax is at least moderately bothersome to them, and 60% say it is very or extremely bothersome.
Nice recommends that impacted earwax is removed by irrigating the ear with the newer, safer low-pressure water irrigation devices, or microsuction to hoover it up. When questioned, most people do not have a preference, although some report that water irrigation is messy and others that microsuction causes discomfort and is noisy.
Removal of earwax in health centres using microsuction results in levels of patient satisfaction that are at least as good as those provided in a hospital.
Before removal, pre-treatment drops or sprays are used to soften the earwax. These are applied daily for up to five days before removal. There is a vast array of pre-treatment earwax-softening products, but none has been shown to be better than the others. As a result, most people use olive oil, which can be administered as drops or as a spray.
There are a variety of self-administered, earwax management products on the market but the evidence for these is limited and none are currently recommended by Nice. An example is the use of Hopi ear candles or cones. To use these, you lie with your head on one side and place the lit candle in the upward-facing ear.
These are reported to work by softening the wax and then sucking it out of the ear canal and up the cone like a chimney. There is no evidence to support this claim. These candles and cones cost money and are ineffective.
If individual GP surgeries lack the expertise or funding to provide earwax removal services, an alternative is for groups of practices to collaborate as a network. The portable nature of modern wax removal equipment is ideal in such settings and for use in home visits. This approach could be especially valuable for vulnerable people, such as those in care homes where 44% of residents with dementia also have impacted earwax.
In the meantime, the withdrawal of NHS earwax removal services is having a far-reaching impact, with people experiencing bothersome and distressing symptoms, sometimes leading to poor mental health.
Kevin Munro does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Treasurer Jim Chalmers has laid out an economic blueprint for pursuing “values-based capitalism”, involving public-private co-investment and collaboration and the renovation of key economic institutions and markets.
In a 6000-word essay in The Monthly titled “Capitalism after the crises”, Chalmers declares the Labor government wants “to change the dynamics of politics, towards a system where Australians and businesses are clear and active participants in shaping a better society”.
Chalmers’ essay looks to the future after the uncertainties of three global crises: the GFC, the pandemic, and the current energy and inflation shock.
The essay comes 14 years after then prime minister Kevin Rudd’s essay in The Monthly on the GFC, and will be seen in terms of Chalmers’ longer term leadership ambitions as well as his directions as treasurer.
While the three crises have been very different, Chalmers writes, their common thread is “vulnerability. In each case our communities, economies, budgets, environment, financial and energy markets, international relationships, and our politics – already fragile enough – became more so.”
Chalmers says Australia’s current economic outlook is being largely shaped by the war in Europe, how China emerges from its COVID-zero policy, potential recessions in large northern hemisphere economies, domestic interest rate rises, and the uncertainty of future natural disasters.
Australia’s growth is expected to slow considerably this year, and unemployment is expected to rise from historic lows.
“But Australia can do more and do better than just batten down the hatches in 2023 or hope for the best,” Chalmers writes.
“We can build something better, more meaningful and more inclusive.”
Doing so relies on three objectives: an orderly energy and climate transition; a more resilient and adaptable economy; and growth that puts equality and equal opportunity at the centre.
“How do we build this more inclusive and resilient economy, increasingly powered by cleaner and cheaper energy?
"By strengthening our institutions and our capacity, with a focus on the intersection of prosperity and wellbeing, on evidence, on place and community, on collaboration and cooperation.
"By reimagining and redesigning markets – seeking value and impact, strengthening safeguards and guardrails in areas of unchecked risk.
"And with coordination and co-investment – recognising that government, business, philanthropic and investor interests and objectives are increasingly aligned and intertwined.”
Stressing the need for open thinking, Chalmers foreshadows that “a depoliticised and more regular” Intergenerational Report will give a clear sense of Australia’s long term future, a Tax Expenditure Statement will provide more transparency about budget pressures, and the Employment White Paper will plan for a highly skilled work force.
Chalmers says the government will “renovate” the Reserve Bank, and “revitalise” the Productivity Commission.
“These institutions need to help deliver change in areas of disadvantage, to prod and inform and empower,” he says.
“It’s not just our economic institutions that need renewing and restructuring, but the way our markets allocate and arrange capital as well.”
In this, governments have a leadership role, not in “picking winners” but in “defining priorities, challenges and missions”.
One powerful tool for this is “co-investment”, Chalmers says, citing the role of the Clean Energy Finance Corporation.
Just as important is “collaboration” with the private sector. “There’s a genuine appetite among so many forward-looking businesspeople and investors for something more aligned with their values, and our national goals.”
Market design and disclosure are also important “to ensure our private markets create public value.”
Chalmers points to the clean energy sector as an example of how private investment increases when the government ensures there is first class information.
“So in 2023, we will create a new sustainable finance architecture, including a new taxonomy to label the climate impact of different investments. This will help investors align their choices with climate targets, help businesses who want to support the transition get finance more easily, and ensure regulators can stamp out greenwashing.”
The government will also try to expand “impact investing”.
“Across the social purpose economy, in areas such as aged care, education and disability, effective organisations with high-quality talent can offer decent returns and demonstrate a social dividend – but they find it hard to grow because they find it hard to get investors.
"Right now, the market framework that would enable that investment in effect doesn’t properly exist.”
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
When people think about artificial intelligence (AI), they may have visions of the future. But AI is already here. At its base, it is the recreation of aspects of human intelligence in computerised form. Like human intelligence, it has wide application.
Voice-operated personal assistants like Siri, self-driving cars, and text and image generators all use AI. It also curates our social media feeds. It helps companies to detect fraud and hire employees. It’s used to manage livestock, enhance crop yields and aid medical diagnoses.
Alongside its growing power and its potential, AI raises moral and ethical questions. The technology has already been at the centre of multiple scandals: the infringement of laws and rights, as well as racial and gender discrimination. In short, it comes with a litany of ethical risks and dilemmas.
But what exactly are these risks? And how do they differ among countries? To find out, I undertook a thematic review of literature from wealthier countries to identify six high-level, universal ethical risk themes. I then interviewed experts involved in or associated with the AI industry in South Africa and assessed how their perceptions of AI risk differed from or resonated with those themes.
The findings reflect marked similarities in AI risks between the global north and South Africa as an example of a global south nation. But there were some important differences. These reflect South Africa’s unequal society and the fact that it is on the periphery of AI development, utilisation and regulation.
Knowing what ethical risks may play out at a country level is important because it can help policymakers and organisations to adjust their risk management policies and practices accordingly.
The six universal ethical risk themes I drew from reviewing global north literature were:
Accountability: It is unclear who is accountable for the outputs of AI models and systems.
Bias: Shortcomings of algorithms, data or both entrench bias.
Transparency: AI systems operate as a “black box”. Developers and end users have a limited ability to understand or verify the output.
Autonomy: Humans lose the power to make their own decisions.
Socio-economic risks: AI may result in job losses and worsen inequality.
Maleficence: It could be used by criminals, terrorists and repressive state machinery.
Then I interviewed 16 experts involved in or associated with South Africa’s AI industry. They included academics, researchers, designers of AI-related products, and people who straddled the categories. For the most part, the six themes I’d already identified resonated with them.
But the participants also identified five ethical risks that reflected South Africa’s country-level features. These were:
Foreign data and models: Parachuting data and AI models in from elsewhere.
Data limitations: Scarcity of data sets that represent and reflect local conditions.
Exacerbating inequality: AI could deepen and entrench existing socio-economic inequalities.
Uninformed stakeholders: Most of the public and policymakers have only a crude understanding of AI.
Absence of policy and regulation: There are currently no specific legal requirements or overarching government positions on AI in South Africa.
So, what do these findings tell us?
Firstly, the universal risks are mostly technical. They are linked to the features of AI and have technical solutions. For instance, bias can be mitigated by more accurate models and comprehensive data sets.
Most of the South African-specific risks are more socio-technical, reflecting the country’s environment. An absence of policy and regulation, for example, is not an inherent feature of AI. It is a symptom of the country being on the periphery of technology development and related policy formulation.
South African organisations and policymakers should therefore not just focus on technical solutions but also closely consider AI’s socio-economic dimensions.
Secondly, the low levels of awareness among the population suggest there is little pressure on South African organisations to demonstrate a commitment to ethical AI. In contrast, organisations in the global north have to show cognisance of AI ethics, because their stakeholders are more attuned to their rights vis-à-vis digital products and services.
The South African government has also failed to give much recognition to AI’s broader impact and ethical implications. This differs even from other emerging markets such as Brazil, Egypt, India and Mauritius, which have national policies and strategies that encourage the responsible use of AI.
AI may, for now, seem far removed from South Africa’s prevailing socio-economic challenges. But it will become pervasive in the coming years. South African organisations and policymakers should proactively govern AI ethics risks.
This starts with acknowledging that AI presents threats that are distinct from those in the global north, and that need to be managed. Governing boards should add AI ethics to their agendas, and policymakers and members of governing boards should become educated on the technology.
Additionally, AI ethics risks should be added to corporate and government risk management strategies – similar to climate change, which received scant attention 15 or 20 years ago but now features prominently.
Perhaps most importantly, the government should build on the recent launch of the Artificial Intelligence Institute of South Africa, and introduce a tailored national strategy and appropriate regulation to ensure the ethical use of AI.
Emile Ormond does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Irish novelist John Boyne published his novel The Boy in the Striped Pyjamas in 2006. The protagonist, eight-year-old German boy Bruno, has no idea that his father is the kommandant of a concentration camp during the second world war. Bruno and his family are sent to live with their father as he carries out his work.
Bored and lonely, Bruno ventures out to the perimeter of the camp where he meets and befriends a young Jewish camp inmate called Shmuel, the eponymous “boy in the striped pyjamas”.
The two boys – who are of equal age but vastly different backgrounds – develop an unlikely friendship. Completely failing to comprehend the gravity of Shmuel’s situation, Bruno yearns to join and play with his friend on the other side of the wire fence.
One day, his wish is fulfilled. Bruno dons a set of striped pyjamas that Shmuel brings him and together they wander about the camp. However, during a roundup, both boys – now indistinguishable – are sent to the gas chambers, where they are killed. Bruno’s grief-stricken parents desperately search for him and are inconsolable when they realise his fate.
The novel was subsequently adapted into a major film in 2008 as well as a ballet in 2016. And in January 2023, Noah Max’s opera, The Child in the Striped Pyjamas opened in London.
Despite its popularity – or maybe because of it – the novel and its adaptations have attracted controversy. Critics argue that because Jew and gentile suffer the same death, the story suggests there is no difference between the two boys and that Bruno is as much a victim as Shmuel. Focusing on the grief of Bruno’s parents generates sympathy for them rather than for the faceless Jews who were murdered in their millions.
In 2020, the Auschwitz Museum tweeted that the children’s novel “should be avoided by anyone who studies or teaches about the history of the Holocaust”. Its criticisms of the novel include its portrayal of Jewish victims as one dimensional, passive and “unresisting”. It has also urged readers not to see it as “a fable”.
Writing in The Jewish Chronicle in 2022, author Keren David explained why such stories are problematic.
The centre of these romantic, sentimental stories is an obsession with Nazis. The non-Jewish reader cannot identify fully with the Jewish victim – too scary, too alien – but they do fear the element within themselves that might have become Nazis. The idea that love conquers all, that even a Nazi camp commander possesses a heart capable of love, is a deeply reassuring fantasy.
Yet, The Boy in the Striped Pyjamas and its subsequent adaptations resist and complicate stereotyping of Jews and gentiles. Shmuel and Bruno are virtually identical and interchangeable, suggesting that both can become victims if in the wrong clothing. In this instance, there is no essential difference between victim and victimiser.
This allows the reader to consider what the philosopher Hannah Arendt, herself a refugee from Nazi Germany, controversially called “the banality of evil”. This is the idea that evil is not metaphysical (existing as an idea outside of human sense perception) but more ordinary, something that we are all capable of in the wrong circumstances.
Noah Max’s 2023 operatic adaptation is deeply personal. Speaking to the Jewish Chronicle, he explained: “The music explores the destruction of humanity’s innocence by the Holocaust through a father’s inability to face the fact that his own evil actions led directly to the murder of his child.”
Max’s maternal great-grandparents, Chaim and Klara Tennenhaus, left Austria in the 1930s as the Nazis rose to power. On Boyne’s novel, Max said:
It’s very hard to convince children to read a book about something as dark and serious as the Holocaust and what I find amazing is that while not all adults get the profound symbolism of the story, kids get it. They pick up on the fact that the children have the same birthday and are the same child.
Of course, Boyne’s book contains fictionalised elements. Given that it is a novel, it has to, and artistic licence should be extended to it for that reason. Works of art cannot be judged on the same terms as history books, given the limitations of the former in terms of length and audience.
As an expert in the representation of the Holocaust on film, as well as someone involved in Holocaust education, I know from personal experience that it is a very tricky task to translate the magnitude of the Holocaust to a younger audience. Any device, however flawed, should be applauded for attempting to do so even if it does not fully succeed.
It is the task of the reader to go and learn more to put the novel in context by reading some of the scores of scholarship on the Holocaust, watching excellent documentaries like Shoah or the US and the Holocaust, or visiting Holocaust exhibitions like those at the Imperial War Museum in London.
For all the criticisms, and while it is not without its problems, I think The Boy in the Striped Pyjamas should continue to be read, adapted, staged and performed.
Anything that introduces the Holocaust and its significance to audiences, even if it does not fully succeed in its artistic aims, should be welcomed – not least because the debate about the novel helps us to keep the memory of the Holocaust alive so many years later.
Nathan Abrams has received funding from a variety of charities and research councils.