Food security report’s meat recommendations ‘oversimplify the issue’

Professor Richard Tiffin, Director of the Centre for Food Security at the University of Reading, discusses the House of Commons International Development Committee report on Food Security, published this week.

The House of Commons International Development Committee report on Food Security is a comprehensive and challenging attempt to highlight some of the issues which confront us with this complex problem.

The report highlights: the potential of GM to contribute to food security, while recognising that care must be exercised in promoting it as a solution; the importance of agricultural research; the importance of open international markets in protecting against shocks; and that a focus on malnutrition may be as important as, if not more important than, a focus on hunger.

However, while the aim of the report is to take a global focus, it is perhaps guilty of oversimplifying the issue of meat production and coming to locally-centred, rather than global, conclusions. It notes that the rate of increase in meat consumption is unsustainable and recommends that meat should be promoted as an occasional product. The report is to be applauded for highlighting the place of animals in ensuring global food security, but the reality is that this area is complex and not well understood at the moment.

It is irrefutable that demand for meat globally will grow as populations become richer.  At a local level, it might be sensible for us to reduce meat consumption and the reality is that price increases will probably lead us to do this voluntarily.  At a global level, however, it is much more important to consider how the inevitable increase in demand can be met and what its implications are for human health. We should not solely focus our attention on repelling the tide.

The report correctly states that we need to identify sustainable livestock systems, but it is not necessarily true that extensive pasture-based systems are more sustainable. For example, there is evidence to suggest that more intensive feeding reduces the emission of greenhouse gases caused by livestock. The role played by livestock in providing a route out of poverty for some of the poorest farmers should also not be overlooked.

Books are not absolutely dead things…

Dr Rebecca Bullard from the Department of English Literature asks whether the digital revolution means we no longer need old books.

Bringing old books to life

In an attack on the censorship of books, the poet John Milton declared that ‘Books are not absolutely dead things, but doe contain a potencie of life in them to be as active as that soule was whose progeny they are; nay they do preserve as in a violl the purest efficacie and extraction of that living intellect that bred them.’ Milton’s metaphor, taken from alchemy, claims that books store (like a vial, or ‘violl’) the distilled genius of their authors. These books, Milton claims, are kept alive by the elixir of their authors’ ideas. Other writers from the ‘early modern’ period (roughly, 1500-1750) were not so confident that an author’s spirit would be enough to keep a book alive. Jonathan Swift remarked, gloomily, that ‘Books, like Men their Authors, have no more than one Way of coming in to the World, but there are ten Thousand to go out of it, and return no more.’ Swift imagined books literally torn apart, their pages turned into firelighters, pie-tin liners, and toilet paper.

Research in early modern English literature has sought ways to preserve our printed heritage in ways that would have been beyond even Swift and Milton’s prodigious powers of imagination. A digital turn in humanities research has led to the creation of an array of resources designed to preserve early modern literary culture for current and future generations. Our University Library subscribes to several of these, including the extraordinary Early English Books Online (EEBO), which offers readers digital images of every page of almost every book published in the British Isles from the earliest days of printing in the late fifteenth century to the year 1700. Researchers at Reading are, themselves, responsible for such e-resources as Verse Miscellanies Online, an online edition of some of the most popular poetry anthologies of the English Renaissance, and a database of Italian Academy Libraries that allows users to enter the world of early humanist learning. Digitisation not only has the capacity to keep books alive and in the public domain but also, through functions such as word searchability and image recognition, to generate new avenues of research.

Does the digital revolution mean, then, that bytes store the ‘efficacie and extraction’ of living intellects better than books? Keeping special collections of rare books in libraries is a costly business – one that resources like EEBO might seem to make redundant. If we regard books simply as repositories of words or texts, the answer might be ‘yes’. But both Milton and Swift make it quite clear that books are more than just inert containers for words. These authors draw attention to the life (and potential death) embodied, materially, in the physical document that is a book. Many of us in Reading’s English Department are carrying out research into early modern ‘material texts’ which demonstrates that books communicate with their readers in ways that cannot easily be captured in electronic form.

The codex – the folded series of pages that many of us think of, automatically, as the default form of the book – exists not just in two but in three dimensions. It has a depth and, therefore, a weight that eludes the virtual world of digital representation. Think about the last book that you read. You almost certainly made judgements about it based on its size and weight even before you opened the covers. And you were probably conscious of the peculiarly tangible form of narrative progress represented by turning pages. The knowledge that one is ‘104 of 286’ pages through a book – the kind of information imparted by a digital edition – cannot capture an aspect of reading material texts that is determined by the sense of touch more than abstract mental processing.

Perhaps even more significantly, digital texts give the misleading impression that the facsimile of a document that we see on our screens provides the definitive version of the text it contains. It conceals the fact that every early modern text is handmade and therefore unique. In this period, setting type, printing sheets, collation, and binding all took place in separate processes and separate places. No two texts produced in the pre-industrial era are exactly identical. Sometimes the differences can be quite radical. Authors were able to make alterations to the words of their texts (known as ‘stop-press corrections’) in the middle of the printing process. Paper and labour being expensive, no bookseller would withhold the uncorrected version from public view. Consequently, early modern texts often circulated in more than one version, even within ostensibly the same edition. When owners got their hands on books, the differences between texts could become even more striking. Early modern readers liked to scribble in books, adding notes and even blotting out words. An example of this practice can be found in images of the playwright Ben Jonson’s collected works, now held in Reading University Library’s Special Collections, which show the inky assault that one early reader made on Jonson’s plays.

Of course, the existence of digital editions does not in and of itself preclude researchers from consulting material editions of early modern texts. And digital resources like the one used to capture the images of Ben Jonson’s inked-out plays demonstrate that new technology can disseminate information about the unique characteristics of particular early modern documents. I don’t want to suggest that there’s anything wrong with the digitisation of early modern texts – far from it. I do, however, want to draw attention to the limitations of electronic facsimiles, and to the advantages of carrying out the kind of archival research that leaves the smell of centuries-old paper, ink, leather (and concomitant dirt) on one’s fingertips. It is expensive to maintain archives of rare books, to conserve and catalogue early modern print. But old books bring to life the words of Swift, Milton and other early modern authors, as well as the cultures that fostered and first read them, in ways that cannot be matched by any digital ‘violl’.

Is a minimum price for alcohol the best way to stop Britain boozing?

Professor Richard Tiffin from the Centre for Food Security ponders whether introducing a minimum price per unit will calm our drinking culture.

Introducing a minimum price for alcohol has been back in the news, with claims that the government is planning to back-track on its commitment to introduce a minimum price of 45p per unit of alcohol. This is perhaps not surprising, since the case for its introduction has never been made with any degree of clarity and it is relatively easy to provide evidence of very mixed and perhaps undesirable impacts. For example, it is clear that the measure will have little impact on the price of alcohol sold ‘on-licence’, where the average price is already £1.16 per unit. It is also unclear whether it will target the people who really matter. Perhaps surprisingly, the highest alcohol consumption occurs amongst people classified as being in ‘high managerial’ occupations, and the lowest levels are amongst the unemployed. Because the unemployed tend to consume cheap drink, however, they will bear more of the burden of the minimum price, along with people in the North East, single parents and the over-sixties.

There is no doubt that for many individuals alcohol consumption is dangerously high, and that the trend in alcohol consumption is worrying. Using price to tackle this is attractive because virtually everyone understands the law of demand: price up, quantity down. But the apparent simplicity of this law is deceptive: first, because changes in quantity are marginal – people with high levels of consumption continue to drink a lot, just a bit less than before; and second, because people will substitute wine, whose price does not change, for some of the now more expensive spirits.
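To see why both effects blunt the policy, consider a back-of-envelope calculation. The sketch below (in Python) is purely illustrative: the prices, quantities and elasticity values are assumptions chosen for exposition, not estimates from any study.

```python
# Illustrative arithmetic for a minimum unit price (MUP) of 45p.
# All prices, quantities and elasticities are hypothetical assumptions.

def quantity_after_price_rise(q0, p0, p1, elasticity):
    """Constant-elasticity demand: quantity response to a price change."""
    return q0 * (p1 / p0) ** elasticity

# A heavy drinker buying cheap spirits at 30p/unit must now pay 45p/unit.
# With inelastic demand (elasticity -0.3), consumption falls only marginally.
spirits = quantity_after_price_rise(q0=100.0, p0=0.30, p1=0.45, elasticity=-0.3)

# Some of the forgone spirits are replaced by wine, whose price already
# exceeds 45p/unit and is therefore untouched by the minimum price.
cross_elasticity = 0.1  # assumed cross-price elasticity of wine demand
wine = 50.0 * (1 + cross_elasticity * (0.45 - 0.30) / 0.30)

print(f"spirits: 100.0 -> {spirits:.1f} units/week")  # ~88.6: down, but still high
print(f"wine:     50.0 -> {wine:.1f} units/week")     # ~52.5: substitution offsets part of the fall
```

Even under these assumptions, the heavy drinker’s total intake falls by only a handful of units a week: the law of demand holds, but the response at the margin is small.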

We should recognise that drinking is a deeply cultural activity and that for some it is undoubtedly a form of self-medication. Attempts to address the problem should be framed accordingly. As with smoking, real progress will only be made when it is clear that high levels of alcohol consumption are culturally unacceptable. Further, increasing the financial pressure on those least able to cope with it may worsen the problems that lead to self-medication.

There is no doubt that we should remove the ‘cheap booze’ culture from society and that doing so will save lives. The question remains, however, whether putting the price up is too simplistic a way of addressing this problem.

Record-breaking year when drought became flood

The Met Office confirmed this morning that 2012 was the wettest year on record for England, and the second wettest ever for the UK as a whole. Dr Roger Brugge, from the University of Reading’s Department of Meteorology, analyses the weather records from the University’s own climatological station during 2012.

2012 was a year in which precipitation and its impacts were uppermost in the minds of most people. With 821 mm of rain falling at the University of Reading, it was the wettest year since 2000, when 852 mm fell. The only other wetter years at the University since 1917 were 1951 (896 mm), 1927 (858 mm) and 1960 (841 mm).

In Reading the year began with three dry months, with May also being on the dry side. Worthy of note in March were the 23rd and 24th (when 20.1°C was reached each day) and the 28th (when 21.4°C was recorded). The latter date came close to surpassing the highest March temperature on record at the University – 22.8°C, set in 1965. Both January and March averaged 1 degC above normal; they were the only months of 2012 that could be said to be much warmer than normal.

April brought the imposition of hosepipe bans – whereupon it promptly turned wet, with 120 mm of rain falling in the month, making it the wettest April locally since 2000. This was followed by a dry May – the eleventh dry month since the beginning of March 2011. With more days reaching 25°C in May than in any other May over the past 50 years, hopes were beginning to build of a good summer, albeit with drought restrictions.

But it was not to be. June turned wet, with 123 mm of rain falling, making it the wettest June in the town since 1971; the longest rainless spell lasted just two days. In fact, all the cloud made June duller than March. But at least by early July all hosepipe bans had been lifted.

July was quite cool and also wetter than average, although August was slightly drier than normal. But, again, the perception was of a poor, dull summer. August, despite temperatures being close to average, was the sunniest month of 2012 (with 193 hours of sunshine) – meaning that 2012 was the first year locally since 1988 in which no month recorded 200 hours of sunshine. So maybe impressions were right?

September brought close to normal rainfall amounts, but the final three months of 2012 were wet – with local flooding, especially in December as rain continued to fall on saturated ground. With both October and December recording over 100 mm of rain (at 128 mm, October was the wettest month of the year), Reading experienced four months in 2012 reaching this mark – the first time this has happened for at least 95 years.

Early December brought a hint of winter, when the maximum temperature on the 12th was just -1.6°C – the coldest December day since 1991.

Overall, temperatures were slightly lower than normal (by 0.2 degC) making it the coldest year since 2010 (which was 0.7 degC colder). Sunshine totals came out at just above average – largely thanks to the sunny months of March and September.

Highlights of the weather in 2012:
  • 821 mm of rain made it the wettest year since 2000 when 852 mm fell.
  • The only other wetter years since 1917 at the university were in 1951 (896 mm), 1927 (858 mm) and 1960 (841 mm).
  • 21.4°C on 28 March was close to the highest March temperature on record at the University (22.8°C in 1965).
  • April was the wettest April locally since 2000.
  • May was the eleventh dry month since the beginning of March 2011.
  • June was the wettest June in the town since 1971. June was duller than March this year.
  • 2012 was the first year locally since 1988 in which no month recorded 200 hours of sunshine.
  • The final three months of 2012 were wet with local flooding.
  • There were four months during 2012 when over 100 mm of rain fell, the first time this has happened locally for at least 95 years.
  • The maximum temperature of -1.6°C on 12 December made this the coldest December day since 1991.
  • Overall, temperatures were slightly lower than normal (by 0.2 degC) making it the coldest year since 2010 (which was 0.7 degC colder).
  • Sunshine totals came out at just above average – largely thanks to the sunny months of March and September.

This summary of the weather of 2012, produced by Roger Brugge and Mike Stroud, is based on daily observations made at the University of Reading climatological station. For more details on the observations of 2012 contact r.brugge@reading.ac.uk.

Measuring the social sustainability of new housing developments

Social sustainability of housing developments (Image provided by Berkeley Group)

Professor Tim Dixon from the School of Construction Management and Engineering discusses the importance of investigating the social sustainability of housing developments.

In an era dominated by climate change debate and environmentalism there is a real danger that the important ‘social’ pillar of sustainability drops out of our vocabulary. This can happen at a variety of scales, from business level through to building and neighbourhood level regeneration and development. Social sustainability should be at the heart of all housing and mixed-use development but, for a variety of reasons, tends to be frequently underplayed; the English city riots last year brought this point back sharply into focus. The relationships between people, places and the local economy all matter, and this is as true today as it was in the late 19th century when Patrick Geddes, the great pioneering town planner and ecologist, wrote of ‘place-work-folk’.

In the current recession, where house-building has fallen to an all-time low in the UK, it is therefore important not only that we build more homes in the right places but also that those homes link and integrate with existing communities. Two key questions stem from this: what exactly is social sustainability, and how do we measure it?

One way in which social sustainability can be understood is in terms of an outcome of place-making, or designing places that are attractive to live in. So social sustainability can be seen as being about people’s quality of life now and in the future, and describes the extent to which a neighbourhood supports individual and collective well-being. Social sustainability therefore combines design of the physical environment with a focus on how the people who live in and use a space relate to each other and function as a community. It is enhanced by development which provides the right infrastructure to support a strong social and cultural life, opportunities for people to get involved, and scope for the place and the community to evolve.

However, house builders have historically shied away from confronting how to measure what is seen by many as a ‘slippery’ concept. Despite this, the Berkeley Group recently commissioned research to assess and measure the social sustainability of four of its housing developments using an independently developed framework consisting of three dimensions – ‘infrastructure and social amenities’, ‘voice and influence’ and ‘social and cultural life’ – which are underpinned by 13 indicators. Data from 45 questions tied into national datasets were used to underpin the indicators, and primary data were collected from face-to-face surveys and a site survey on the housing developments.
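As a concrete illustration of how such a framework might turn survey answers into comparable scores, here is a minimal sketch in Python. The three dimension names come from the article; the indicator names, scores and benchmark value are invented purely for illustration and are not the framework actually used in the research.

```python
# Hypothetical aggregation of survey-derived indicator scores into the three
# dimensions named in the article. All numbers below are invented examples.

from statistics import mean

indicator_scores = {
    "infrastructure and social amenities": {
        "access to transport": 0.72,   # illustrative normalised survey score (0-1)
        "local amenities": 0.64,
    },
    "voice and influence": {
        "feels able to influence decisions": 0.41,
        "willingness to act locally": 0.48,
    },
    "social and cultural life": {
        "sense of belonging": 0.81,
        "talks to neighbours regularly": 0.76,
    },
}

NATIONAL_BENCHMARK = 0.55  # hypothetical benchmark derived from national datasets

for dimension, indicators in indicator_scores.items():
    score = mean(indicators.values())  # unweighted mean for simplicity
    print(f"{dimension}: {score:.2f} ({score - NATIONAL_BENCHMARK:+.2f} vs benchmark)")
```

Reporting each dimension against a benchmark, rather than collapsing everything into one composite number, mirrors the pattern of results described below: strong on belonging and safety, weaker on voice and influence.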

The research is an honest and independent appraisal of one house builder’s new housing developments. In the four developments (which were all in London and the south east), the research found that whilst people felt they belonged to the community, talked to their neighbours regularly and planned to stay, they also felt less able to play an important part in local life and were less likely to pull together to improve the neighbourhood. Overall, though, the developments scored well on well-being and safety compared with comparable places and national benchmarks.

Recent changes to the National Planning Policy Framework, and the emergence of localism and well-being agendas have started to move social sustainability centre stage, and the next phase of this research is set to examine how new developments impact directly on the communities in which they are located. Long-term stewardship of housing developments is therefore likely to become increasingly important.

Tim Dixon is professor of sustainable futures in the built environment in the School of Construction Management and Engineering. He also leads the University of Reading’s new Sustainability in the Built Environment (SustBE) research programme and is a research associate of the Walker Institute. His personal research revolves around the interface between the sustainability agenda and its impact on property development, investment and occupation. The research is based on a strong interdisciplinary approach which incorporates policy and practice impacts and futures thinking. The research (carried out by Social Life and the University of Reading) on which this blog is based has recently been published by Berkeley Group in their report, ‘Creating Strong Communities’ (www.berkeleygroup.co.uk/sustainability/socialsustainability).

Why has Obama failed to knock ‘47’ bells out of Romney?

Dr Jonathan Bell from the Department of History wonders why the race to the White House is still too close to call…

‘There are 47 percent of the people who will vote for the president no matter what,’ commented Mitt Romney in a video clip recorded at a fundraiser in September.

This was one of the most controversial moments in the current US presidential campaign so far and, despite the fact that Romney recently disavowed the statement and impressed in the first TV debate, should have done irreparable damage to the challenger’s campaign.

With Romney’s further comments that ‘forty-seven percent of Americans pay no income tax,’ and that his role ‘is not to worry about those people. I’ll never convince them they should take personal responsibility and care for their lives,’ it seems staggering that with under two weeks left until the election, the race to the White House is still too close to call.

So what’s Obama done wrong?

Romney’s attempt in the video to lay out in sharp relief the fundamental gulf between his political worldview and that of President Obama reveals much about the highly charged debate raging in the United States over the future direction of the country. The fact that in making such a statement Romney did not deliver a self-knockout blow demonstrates how powerful notions of individualism and freedom from state control remain, despite the economic cataclysm of 2008.

The 2012 election is a referendum on how far government can be trusted to nurture a fairer society, a debate that has been raging for at least a hundred years but which has now reached fever pitch.

In response to a similarly devastating economic depression in the 1930s President Roosevelt set up the modern American social welfare system with the Social Security Act, and won a second term by a landslide despite failing to end the Great Depression. That welfare system expanded gradually through the 1950s and 1960s, notably with the creation of Medicare and Medicaid, and Republican Ronald Reagan was unable to curtail government programmes as much as he wanted.

Yet America’s relationship with Big Government remained uneasy and tentative, and notions of the ‘undeserving poor’ were powerful weapons in the Republican arsenal when the party took control of Congress in the 1990s. The economic crisis of recent years has only heightened conservative attacks on government: attempts to spend stimulus money or to make the state an arbiter of health care provision are seen as reckless, extravagant schemes that risk wrecking the US economy on the shoals of fiscal ruin.

But if FDR was able to win a second term despite opponents’ cries of socialism and in spite of the lingering misery of economic crisis, why is Obama having such difficulty painting Romney as uncaring and aloof, happy to cut taxes on the rich while millions of Americans suffer unemployment and job insecurity?

The problem lies in the very success of the New Deal mission over the last eighty years. Most Americans benefit from government aid: a bewildering array of tax breaks, old-age pensions, health care for the elderly and for a majority of mothers and children, and farm subsidies, to name a few examples.

But the majority don’t consider themselves part of Romney’s 47 percent. They don’t see their own tax breaks or health care as a handout, but they are susceptible to appeals that Democrats want to extend the social safety net to the undeserving.

Obama has failed to get across the message that Romney was wrong not because the 47% figure was too high, but because it was too low: corporate America also depends on government, the middle class depend on government, and the poor in many ways get the least from government.

The persistence of the myth – a myth Obama himself cannot quite dare to explode – that all but undeserving shirkers gain economic security purely from their own efforts is making it easier to attack Obama than it should be, and could see him passing the Oval Office keys to Governor Romney.

Dr Jonathan Bell is Head of the Department of History and is interested in the political history of the United States since the Great Depression, in particular the relationship between political ideas and social change. He works mainly on the politics of the post-1945 era, looking at ways in which the theory and practice of liberalism and conservatism have changed in the decades since 1929.

How family trees can help predict which plants could provide new cures

Dr Julie Hawkins from the School of Biological Sciences discusses recent research which aims to revitalise the hunt for new plant-based medicines.

The earliest medicines were derived from plants, and the first doctors were trained in botany. Today, many societies rely directly on plants and plant knowledge for the health of their people, and a large proportion of pharmaceutical medicines are derived from plants. These pharmaceuticals are often used in ways which reflect traditional use of the species from which they have been derived.

Despite the importance of plants to health, however, there is some controversy as to whether new drugs could be derived from them. Recent developments in drug discovery have made use of robotic screening of compound libraries, whilst bioprospecting based on traditional knowledge of plant use has fallen out of favour. Some have argued that useful pharmaceuticals are unlikely to be discovered amongst the ‘riches of the rainforests’. Recognising the intellectual property of the people who discovered and use these plants is seen as politically complex, and this discourages investment. In addition, collecting traditionally used plants for screening is time-consuming and relies on expertise in botany and ethnobotany. Furthermore, some have argued that plant use may not be indicative of bioactivity, so screening plants used by traditional healers may not yield valuable insights.

Research I have been involved in has recently looked at a novel way of evaluating plants used by traditional healers to address this. We considered the phylogenies, or ‘family trees’, of the plants found in three global biodiversity hotspots. By using DNA sequences to work out how plants in these regions were related, we were able to see whether plants used by traditional healers in different regions were closely related to each other. The geographic regions we selected for the study were ones unlikely to have exchanged knowledge about traditional plants – Nepal, New Zealand and South Africa. We found that in these regions the same closely-related plants were used by traditional healers and, interestingly, were used to treat the same conditions.
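The statistical idea behind this kind of analysis can be illustrated with a simple permutation test: are the medicinal species more closely related on the phylogeny than a random draw of species would be? The sketch below is a toy Python illustration of that idea only – the species, distances and sample sizes are invented, and the published study used far richer DNA-based phylogenies and methods.

```python
# Toy permutation test: do traditionally used species cluster on a phylogeny?
# All species names and phylogenetic distances are invented for illustration.

import random
from itertools import combinations

species = ["A", "B", "C", "D", "E", "F"]
_dist = {
    ("A", "B"): 1.0, ("A", "C"): 4.0, ("A", "D"): 5.0, ("A", "E"): 5.0, ("A", "F"): 6.0,
    ("B", "C"): 4.0, ("B", "D"): 5.0, ("B", "E"): 5.0, ("B", "F"): 6.0,
    ("C", "D"): 2.0, ("C", "E"): 3.0, ("C", "F"): 6.0,
    ("D", "E"): 3.0, ("D", "F"): 6.0, ("E", "F"): 6.0,
}

def dist(x, y):
    """Symmetric lookup of pairwise phylogenetic distance (all values > 0)."""
    return _dist.get((x, y)) or _dist[(y, x)]

def mean_pairwise_distance(group):
    pairs = list(combinations(group, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

used = ["A", "B", "C"]  # species recorded as medicinal in a region
observed = mean_pairwise_distance(used)

# Null model: draw equally sized random sets of species from the flora.
random.seed(42)
null = [mean_pairwise_distance(random.sample(species, len(used))) for _ in range(10_000)]
p_value = sum(x <= observed for x in null) / len(null)

print(f"observed mean distance: {observed:.2f}")
print(f"p-value (clustered more tightly than chance): {p_value:.3f}")
```

A result in the lower tail of the null distribution indicates that the used species sit closer together on the tree than chance alone would allow.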

The fact that evolutionarily related plants are used in different regions, even though the same species are not present, strongly suggests an independent discovery of plants which share the same or similar health properties.  This new finding could revitalise the search for valuable plant medicines. Targeted screening of plants with a high potential for having health benefits would reduce the time investment in collecting species, and also make it easier to negotiate fair and equitable distribution of benefits with the originators of the knowledge.

Dr Julie Hawkins works in the School of Biological Sciences and is interested in the application of molecular marker data to determine identity, parentage and provenance of economically important plant species. This research has recently been published in PNAS: Haris Saslis-Lagoudakis, Vincent Savolainen, Elizabeth M. Williamson, Félix Forest, Steven J. Wagstaff, Sushim R. Baral, Mark F. Watson, Colin A. Pendry & Julie A. Hawkins. ‘Phylogenies reveal predictive power of traditional medicine in bioprospecting’ PNAS (2012). doi:10.1073/pnas.1202242109.


Combining the old with the new is the recipe for global food security

On World Food Day, Professor Richard Tiffin from the University’s Centre for Food Security discusses the challenges faced when meeting the global demand for food.

The present debate around how best to meet the global demand for food has a tendency to polarise into two camps. First, there are those who argue that the food system is broken and that what is required is a return to more ‘traditional’ ways, in which food is produced on labour-intensive small farms and distributed locally. In the opposite corner are technologists who argue that the only way we will be able to meet the predicted increase in demand for food of between fifty and one hundred per cent is to continue the process of intensification that characterised the development of agriculture during the twentieth century. Instead of this polarisation, however, perhaps some cross-fertilisation is necessary.

A return to a more traditional agriculture has some appeal. There is no denying that small-scale production gives a better-looking countryside and increased rural employment. Its diversified products also provide a nice contrast to the commoditised food products that dominate the supermarket shelves. In a more subtle way, the greater diversity of the farming systems employed on these units may provide us with a greater degree of resilience in the face of the increased risk of extreme weather events which climate change brings.

This is all logical, but the problems arise when attempts are made to scale the approach up to meet a much larger part of our food needs. Increased labour intensity demands more labour, and we have to ask where this will come from. ‘New-lifers’ can only go so far; farms will need to reverse the reality of the labour market, in which non-agricultural jobs have better conditions and therefore draw people out of the sector. It is sometimes overlooked that farm employment is often dangerous, cold, wet, depressing and poorly paid.

The argument becomes much more dangerous, however, when we apply it to developing countries.  Here the small scale sector is often vital in ensuring short term food security, but to argue that it should remain so risks consigning these countries to a permanently less developed state.  The process of agricultural intensification must be seen as one component of the process of economic development.  Blocking agricultural development will stop the release of labour (and other resources) from agriculture which drives growth in other sectors of the economy.  Without this, growing populations may or may not have enough food, but they will be without the services that are necessary to support their inevitably more urban lifestyles.

So, we are left with a situation in which ‘intensification’ must continue, but we must also learn from the practitioners of ‘traditional’ agriculture. These farmers are acutely aware of the fact that food production is not an industrial process. Food is, at least in part, a product of nature. This is a fact that seems not to have escaped the food consumer: all the evidence points to the fact that ‘natural’ food is valued. The implication is that we cannot divorce our food production from the ecosystem which supports it. Changes in our farming system have implications for the other things which our ecosystem gives us, for example biodiversity and carbon cycling. Equally, changes in the ecosystem, for example reductions in the population of pollinators, have implications for food production.

There are some encouraging signs that a middle way may become our focus.

The concept of sustainable intensification is on the agenda. This recognises that we must not stop the search for new ways of producing food, but that we should do so in ways which work with nature rather than in a box apart from it. We should learn from our traditions but not hark back to them. By 2050 there will be 2bn more people in the world, 1.9bn of whom will be in developing countries. We owe it to them.

Professor Richard Tiffin is Director of the Centre for Food Security at the University of Reading. Richard is an agricultural economist and his current research is focused on diet and health policy, in particular the impacts of fiscal policies with the objective of improving dietary health, such as the so-called ‘fat tax’.


Fate of pre-nups and divorce matters to more than just celebs

As the Law Commission publishes its Supplementary Consultation Paper on the financial consequences of divorce, Dr Thérèse Callus, from the University of Reading’s School of Law, argues that the present law on divorce could be said to be uncertain, outdated and inappropriate to today’s couples.


A Law Commission Consultation in 2011 considered whether ‘pre-nups’ – pre-nuptial agreements between spouses, as favoured by celebrities and the super-rich – should receive greater weight in English law. While pre-nups tend to be of only minimal interest to the average couple, their status in English law says a lot about how law regulates the consequences of marriage and civil partnership for us all. (The discussion that follows applies equally to civil partners insofar as the law relating to the dissolution of a civil partnership mirrors that applicable to spouses upon divorce). It is for this reason that the Law Commission issued the Supplementary Consultation Paper  ‘Marital Agreements, Needs and Property’ on 11th September 2012.

London is termed the ‘divorce capital of the world’ yet many argue that the law on divorce settlements is uncertain, dated and incompatible with today’s society. Wealthy individuals claim the right to draw up pre-nuptial contracts in order to protect ‘their’ wealth from being divided with an ex-spouse upon divorce. Yet these ‘contracts’ are not legally binding in law. They are simply one of the considerations that a court will take into account when deciding what a ‘fair’ outcome will be for the parties, through the application of an Act of Parliament from 1973.

In the English common law system, there is no equivalent to the continental matrimonial property regimes or contracts which come into effect upon marriage and which provide for property distribution when the marriage is dissolved through either divorce or death. Consequently, in England & Wales, what is gained from a bespoke case-by-case result is lost in the lack of certainty, both for practitioners advising their clients, and for couples about to embark upon marriage but who are concerned to maintain autonomy over their financial situation.

The law is thus unpredictable and unclear.  The Law Commission consultation on pre-nups in 2011 posed different options for reform and in so doing highlighted the controversial and ambiguous notions of ‘non-matrimonial property’ and the ‘needs’ of each spouse following divorce.  It is these wider issues that are of most interest to all couples and the subject of the Supplementary Consultation Paper.

In making an order, the English courts have always concentrated on the ‘needs’ of the claimant spouse. It is obvious that within most marriages, one or other of the spouses may make career – and therefore economic – sacrifices for the benefit of the family, often to undertake child care. This in turn allows the other spouse to continue with their career and earning progression. Upon divorce, the economically weaker spouse finds herself (or himself) at a disadvantage because of the respective contributions during the marriage. Is this merely to be seen as a matter of choice? Or should the law require that the spouse who maintains their earning capacity ‘compensate’ the other for their economic dependence? What of the pre-nuptial contract which stated that no claim should be made on each other? And what about pre-acquired wealth? For example, how should the law treat Paul McCartney’s wealth, accumulated during a music career which preceded his marriage to Heather Mills, compared with a family business in existence prior to a marriage but which has greatly increased in value during it, in part because the non-business spouse has looked after the family? These are some of the questions which need to be addressed through the current consultations.

Of course, more than the legal issues at stake are the political risks – no Government wants to be seen to be ‘undermining marriage’ but at the same time, claims for respect for individual autonomy and choice are strong. Yet choice is only real when parties know both the options and their consequences. The Supplementary Consultation Paper thus also highlights the importance of educating couples as to the legal implications of their actions.

The debate is now open not only on how finances should be shared upon divorce and dissolution of civil partnership, but indeed the rationale behind why the law should dictate how parties wish to manage their lives.  In limiting the consultation to marital property, agreements, and needs, the Government has indicated it is not willing to consider wholesale reform.  Although the Law Commission has realistically noted that such fundamental reform cannot be achieved in this current round, it is clear that the stakes for society are such that it is only a matter of time (and political will) before any Government is forced to grasp the nettle.

Dr Thérèse Callus is Senior Lecturer in Law and a member of the Advisory Board to the Matrimonial Property, Needs and Agreements project to the Law Commission. The opinions expressed in this piece are her own.

The fading voices from Reading’s past

Professor Peter Kruschwitz, from the Department of Classics at the University of Reading, is a Latin scholar with a long-established specialism in Latin inscriptions. One of his current projects aims to collect, edit, translate, and explain the Latin inscriptions, ancient, mediaeval, and modern, from Reading.

The Latin term monumentum, from which the English ‘monument’ is derived, is related to the verb monere, ‘to remind’. Monuments are thus tangible, visible manifestations of human memory. Often monuments are inscribed, to aid the memory of those who want to remember, and that of future generations, as to why a certain monument was erected – be it specific to a person or an event, or gnomic, i.e. in the form of some general wisdom. Monuments and inscriptions form a unit that cannot be dissolved, and it is the conjunction of the two that makes for a particularly powerful tool. But what if the language of the inscription cannot be understood by most, or at least some, of the public that it addresses? What happens if neglect allows the monument itself to decay? One could argue that in those cases memory will fade with the monument itself, and neglect will turn into ignorance. The Latin inscriptions from Reading, whether ancient, mediaeval, or modern, provide numerous telling examples for this view.

Times are a-changin’

Inscription on HSBC building

Vis unita fortior.

Meaning:

“A force united is stronger.”

Why exactly is this on display in Reading’s Broad Street? Older residents of Reading may remember that the building was not always that of HSBC, but was once the location of the Midland Bank. This bank was acquired by HSBC in 1992 and rebranded in 1999. The crest, with its freemasonic elements, and the Latin motto were those of the Midland Bank, originally founded in 1836.

It is beautifully ironic, in a way, that the Latin inscription and the crest alone survive, with a Latin motto that in hindsight seems to suggest that, even for banks, only joining forces offers hope of survival.

Voices muffled and vanishing

After just twenty years, it has become difficult to understand the relationship between the Midland Bank motto and its current location. It may also be difficult or in fact impossible for many passers-by to grasp the meaning of the text, even though Latin mottos are, in fact, still frequent in Britain.

Inscription for Edward Dalby

But what if the inscription is closer to 300 years old, substantially longer, written exclusively in Latin, and not even on display in a prominent spot? The answer should be obvious, and it is depressingly easy to illustrate the point with an example. The following stone now lies in the graveyard of Saint Laurence, just north of the church building. It is the monument of Edward Dalby, once upon a time the Recorder of Reading. It reads thus:


Spe resurgendi

Hic prope depositi sunt cineres Edwardi Dalby,

ar(miger) qui obiit 30 Martii, Anno D(omi)ni 1672,

aetatis 56.

Et Franciscae uxoris ejus, filiae superstitis et her(e)dis

Caroli Holloway, ar(miger), servientis ad legem:

Haec obiit 17 Augusti, anno D(omi)ni 1717,

aetatis 90.

Et Elizabethae, filiae eorundem, quae obiit

8 Februarii, anno D(omi)ni 1686, aetatis 23.

“In the hope of resurrection, here are deposited the ashes of Edward Dalby, armiger, who died 30th of March, A.D. 1672, aged 56. Also of Frances, his wife, surviving daughter and heir of Charles Holloway, serjeant-at-law: she died on 17th of August, A.D. 1717, aged 90. Also of Elizabeth, their daughter, who died on 8th of February, A.D. 1686, aged 23.”

The stone, embellished with a significant coat of arms at its top, now exposed to weather, filth, and vegetation, records the life of one of Reading’s foremost dignitaries at the time as well as those of his family: Edward Dalby of the Inner Temple. Henry Hyde, Earl of Clarendon, described him as a man ‘of eminent loyalty and as wise a man as I have known of his rank’. He married Frances, daughter of Charles Holloway, also a lawyer. Dalby became Recorder (or High Steward) of Reading in 1669, replacing Daniel Blagrave, who had to flee the country for his involvement in regicide (as one of the signatories of King Charles I’s death warrant). Their daughter died young. Their son John, not mentioned in this text, continued the legacy of the Dalby family after his father’s death. Only a small street, Dalby Close in Hurst, Wokingham, still preserves the family’s name today.

A small step towards regaining collective memory?

There are well over 100 Latin inscriptions, from ancient to modern, in Reading – some of them well-known to everyone (like the one on the pedestal of the statue of Queen Victoria on Blagrave Street), others rather more cryptic and hidden away. Many of them have fascinating stories to tell, and they all add to the jigsaw puzzle that is the history of Reading. For the Latinist, they are also invaluable documents to understand the spread, role, and legacy of Latin in modern times. It is high time to collect these texts and to make them available to the public, in translation and with appropriate amounts of documentation, so that fading memory, in conjunction with a language barrier, will not soon turn into complete and utter oblivion.

http://www.reading.ac.uk/classics