Tuesday, May 06, 2014

What it means to be part of the Unwritten Constitution

The following is the third and final assignment submitted in the Coursera Constitutional Law class taught by Professor Akhil Reed Amar of Yale University.

In the 1819 Supreme Court case McCulloch v. Maryland, Chief Justice John Marshall wrote that “[the Constitution’s] nature ...requires that only its great outlines should be marked, its important objects designated, and the minor ingredients which compose those objects be deduced from the nature of the objects themselves...We must never forget that it is a Constitution we are expounding.” This process of deducing and expounding has been guided by a set of supplementary texts, practices, and principles that have become part of an unwritten Constitution. But how does something become part of this tradition? To answer this question, we must not only look at the existing components of the unwritten Constitution but also understand the historical context in which they emerged.

A primary component of this tradition is the Declaration of Independence. Its themes of self-governance, equality, and individual liberty guide us in interpreting the terse text of the 1787 document. Looking at the Constitution in the context of the Declaration, we can better understand and interpret key parts of the Bill of Rights. For instance, the Third Amendment's protection against the quartering of soldiers in private homes is directly related to a grievance in the Declaration. Likewise, the Ninth Amendment’s vague language of “rights retained by the people” is more easily understood in this broader context. The Declaration of Independence thus provides the philosophical and historical foundation for interpreting the written Constitution.

With the philosophical foundations in place, we can turn to the process of ordainment and ratification as a second tool for expounding the written text. This process included a national discussion that saw the document come to life in debates across the states. The Constitution itself was printed in newspapers for all to read and discuss, as were letters both in favor of and against ratification. This experience strengthened the bonds of a participatory democracy and showed the importance of both free speech and a free press.

While the bonds of democracy were strengthened during the nationwide debate, the democratic legitimacy of the Constitution was strengthened by the inclusiveness of its ratification process. When it was time for the people to select their convention delegates, states lowered or completely dropped property qualifications for voting - a radical concept for the time.

Both of these ideas - open debate and more inclusive voting - give us a context for better understanding the birth of the Constitution. They also start a tradition that is more fully realized over time under the lived Constitution.

Subsequent generations have added their voices to the unwritten Constitution through a set of emerging practices, or what Professor Akhil Reed Amar calls “hearing the people.” For instance, certain legal practices that are now taken for granted were not common during the founding generation. Concepts such as testifying in court on one’s own behalf, proof beyond a reasonable doubt, and providing evidence for one’s defense have emerged over the past two centuries. This “lived Constitution,” as Amar calls it, has given the generations that followed the founders the ability to mold the legal and political culture under which they live.

So, for something to be part of America’s unwritten Constitution, it must be part of an emerging and evolving legal, political, and cultural landscape. In short, it means being part of the process of expounding the written Constitution.

Tuesday, April 15, 2014

Constitutional Textualism

The following is an essay submitted for an assignment in the Coursera Constitutional Law class taught by Professor Akhil Reed Amar of Yale University.


Some constitutional scholars, who call themselves textualists, say that the only source of meaning in constitutional law should be the text of the Constitution itself. While being faithful to the text is important, there are instances that require us to look beyond strict textualism.


We can take the example of the Vice President and the question of who shall preside over his impeachment trial. A strict reading of the Constitution would indicate that the Vice President himself would preside. After all, he is the presiding officer of the Senate (where impeachment trials are held), and the text of the Constitution provides an exception to this rule only when the President is tried. Is it absurd to think that we would allow a man to preside over his own trial? It would seem so, but only by looking outside the literal text can that absurdity be recognized.

To look outside, however, we must start by looking at the document itself more holistically. For instance, the preamble states that the Constitution was ordained, in part, to “establish Justice.” Is it just for a man to preside over his own trial? James Madison in Federalist #10 points out that “no man is allowed to be a judge in his own cause, because his interest would certainly bias his judgment, and, not improbably, corrupt his integrity.” So, we can identify an obvious error in the document only by understanding the legal culture in which it was drafted.

To rectify this particular error, however, we can look to the text itself, but only by “reading between the lines,” as Professor Akhil Reed Amar would put it. Specifically, the Senate, with the power to “determine the Rules of its Proceedings,” could easily appoint some other party to preside over the Vice President’s impeachment.

Beyond addressing obvious holes in the document, there are specific areas in the text that require interpretation beyond a literal reading. For instance, the Ninth Amendment states that “the enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.” This implies that the people have rights prior to the establishment of the Constitution. But how are we to determine these rights?

To address this problem, Professor Amar suggests that we take into account how Americans have “lived their lives in ordinary ways.” Put more broadly, we must look at the “lived Constitution.”

An example of the lived Constitution is the emergent norm of defendants testifying on their own behalf at their trial – something that was not allowed at the time the Constitution was written. Over time, however, this practice has become so commonplace that to strike it down as being unconstitutional would seem unjust.

We also have to look to a lived Constitution if we expect to address legal questions that involve modern technology. For instance, is a person’s cellphone protected from an unreasonable search under the Fourth Amendment? (See U.S. v. Wurie.) Such devices today can contain a wealth of information about our lives, including financial information, personal contacts, schedules, etc. It would seem that they would be protected as personal papers and effects.

However, in Chimel v. California (1969) the Supreme Court ruled that, incident to arrest, police may search an arrestee and the area within his immediate control. Does this include a cellphone the arrestee is carrying? If so, how much of the cellphone can be searched before such a search is considered unreasonable? Such questions need to be answered by interpreting not only the textual but also the lived Constitution.


An argument could be made that the issues raised here can be addressed through the Constitution's amendment process. This, by its very nature, would be a more textualist approach. However, such an approach would, as John Marshall put it, “partake of the prolixity of a legal code,” and turn the Constitution into a document that “could scarcely be embraced by the human mind.” A better approach would seem to be to apply various principles and emergent norms to a more consistent, terse text. As Professor Amar would say, these principles and norms help supplement, not supplant, the text of the Constitution.

Monday, March 03, 2014

The Democratic Nature of the U.S. Constitution

The following is an essay submitted for an assignment in the Coursera Constitutional Law class taught by Professor Akhil Reed Amar of Yale University.


The U.S. Constitution opens with the statement “We the People,” announcing the most “democratic deed” in history. The democratic nature of the document is embodied in several aspects, the first of which was the ratification process.


While Article VII required only that the document be ratified by state conventions (with nine states sufficient to establish it), 8 of the 13 states lowered property restrictions to allow more people to vote for ratification convention delegates. This allowed for broader participation amongst the people. No other great democracy in history had allowed such a broad level of participation in its constitution's ratification process. For instance, the Articles of Confederation, the governing document of the states prior to the Constitution, was sent out to be ratified strictly by the state legislatures - no special consideration was given to the citizens of the states. Also, the English constitution, such as it was, was never reduced to a single document on which the British people could vote. Only Massachusetts and New Hampshire had put their state constitutions to a vote by the people of the various townships (examples which set the stage for the U.S. Constitution).


There are also various provisions within the document itself that demonstrate democratic values. For instance, members of one house of the bicameral legislature, the House of Representatives, are elected biennially “by the People of the several States.” The Articles of Confederation, by contrast, had only one house and its members were chosen by the various state legislatures (except for Connecticut and Rhode Island where voters could weigh in on the selection of delegates).


The Constitution also requires the House of Representatives to change in size (initially) and apportionment based upon population growth. This meant that the lower house would continually reflect the changing demographic shape of the people. This was not a feature of the Articles of Confederation, which limited each state to between two and seven delegates in Congress, the actual number being determined by the state legislature, not by the size of the state's population. (The restriction on the ability of colonial assemblies to adjust in size based upon population shifts was actually one of the grievances against the English Crown listed in the Declaration of Independence.)

Other provisions of the Constitution that reflect its democratic nature concern qualifications for office. For one, no person can be excluded from office based upon his or her religious beliefs. As stated in Article VI, “no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States.” In 1787, many state governments required office holders to profess Christianity.


The Constitution does, however, put age restrictions on those seeking office. House members must be at least 25 years old, senators at least 30, and the president at least 35. While this requirement seems to restrict participation, it can be viewed as an egalitarian feature.


In many Old World societies the young scions of aristocracy would have an advantage from an early age because of their family heritage. Putting a minimum age requirement on office seekers levels the playing field for those without such a hereditary advantage by giving them time to make their own mark in society.


One final democratic feature of the Constitution is actually something that does not appear in the document itself. That is, there are no property qualifications for office holders. Men of little or no property could now hold any office within the government, something that was uncommon at the time of the Constitution’s ratification. For instance, the English House of Commons was made up of men of vast estates, and even the old Congress under the Articles of Confederation had members whose states imposed property qualifications on delegate selection.


While the Constitution’s democratic nature was unique at the time of its ratification, it must not be forgotten that many in society were still excluded from participation in the electoral and governing processes. Over time, however, the document has been expanded to better reflect the notion of “We the People.”

Monday, August 30, 2010

NFJS 2010 in Raleigh

I attended the No Fluff Just Stuff tour in Raleigh this past weekend with a bunch of others from Railinc. After the event on Sunday I tweeted that I wasn’t all that impressed with this year’s sessions. Matthew McCullough responded asking for some details on my concerns. Rather than cram my concerns into a terse tweet, I thought I’d be fair and give a lengthier response here.

First off, I really wasn’t planning on going this year. I took a look at the schedule and didn’t see enough new content that interested me. (I have seen/read/heard enough about REST, Grails, JRuby, Scala, etc.) In 2007 and 2008 I went with a group from Railinc and had a pretty good time while learning about some new things that were going on in the industry. (We didn’t go in 2009 for economic reasons.)

What changed my mind about going was the interest expressed by some other developers at Railinc. Since I coordinated the 2007 and 2008 trips, I thought I’d get this one coordinated, and since there was a good amount of interest, I figured I’d give it a shot as well. So, to be fair, I wasn’t going in expecting much anyway.

Here were the key issues for me:
  1. Some of the sessions did not go in the direction that I expected. To be fair, though, I was warned ahead of time to review the slides before making a decision on a session. The problem here is that some presenters relied more on demos and less on slides, so in some cases it was hard to judge by just the slide deck.
  2. As I said above, I wasn’t planning on going in the first place because of the dearth of sessions that seemed interesting to me. I ended up attending some sessions simply because they were the least irrelevant options in their time slots. There were actually two sessions that I bailed on in the middle because I wasn’t getting any value from them.
  3. Finally, and this is completely subjective, some of the speakers just didn't do it for me. While you could tell that most (if not all) of the speakers were passionate about what they were talking about, some were just annoying about it. For instance, some of the attendees I spoke to felt that the git snobbery was a bit of overkill. Some of it was just speaker style - some click with me, some don't.
Some things I heard from the other Railinc attendees were:
  • Too much duplication across speakers
  • Not enough detail along tracks
  • Some of the sessions were too introductory - you could have gotten the same information from a bit of googling.
Granted, some of my concerns are subjective and specific to my own oddities. But I do remember that I had enjoyed the '07 and '08 events much more.

I did, however, enjoy Matthew's first session on Hadoop. I knew very little about the technology going in and Matthew helped crystallize some things for me. I also got some good information from Neal Ford's talks on Agile engineering practices and testing the entire stack.

I really like the No Fluff Just Stuff concept in general. I think it is an important event in the technology industry. The speakers are knowledgeable and passionate which is great to see. My mind is still open about going next year, but it will be a harder sell.

Wednesday, August 25, 2010

Not so Stimulating

I sent the following to the Raleigh News & Observer:
E. Wayne Stewart says that “enormous fiscal stimulus ... to finance World War II led the U.S. out of the Depression.” While it is true that aggregate economic indicators (e.g., unemployment and GDP) improved during the war, it was not a time of economic prosperity.

During World War II the U.S. produced a lot of war material, not consumer goods. It was a time when citizens went without many goods and raw materials due to war-time rationing. It was also a time when wages and prices were set by government planning boards. In short, it was a time of economic privation for the general public. It wasn't until after the war, when spending was drastically reduced, that the economy returned to a sense of normalcy.

The lesson we should learn is that, yes, it is possible for government to spend enough money to improve aggregate economic indicators. That same spending, however, can distort the fundamentals of the economic structure in ways that do not produce wealth as judged by consumer preferences.
This argument, that government spending during WWII got us out of the Depression, is used by many to justify economic stimulus. The argument I use above comes from Robert Higgs and his analysis of the economy during the Depression and WWII.

For me, though, the biggest problem with the "just spend" argument is that it ignores the nuances and subtlety of a market-based, consumer-driven economy. It is like saying that to turn a 1,000-word essay into a 2,000-word essay, all you need to do is add 1,000 words. No thought is given to how those extra words need to fit into the overall essay in a coherent manner. A productive economy needs spending to occur in the proper places at the proper times, and it is the market process that does this most efficiently (not perfectly efficiently, but better than the alternatives).

Prediction Markets at a Small Company

Railinc has recently started a prediction market venture using Inkling software. We have been using it internally to predict various events including monthly revenue projections and rail industry traffic volume. In July, we also had markets to predict World Cup results. While this experience has been fun and interesting, I can't claim it has been a success.
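
For anyone unfamiliar with how such markets turn trades into forecasts: many prediction-market platforms use Hanson's logarithmic market scoring rule (LMSR) as an automated market maker, so prices can always be quoted even with few traders. The sketch below is a generic illustration of that mechanism, not Inkling's actual implementation; the function names and the liquidity parameter b are my own choices.

    import math

    def lmsr_cost(q, b=100.0):
        # Cost function C(q) = b * ln(sum_i exp(q_i / b)),
        # where q_i is the number of shares outstanding for outcome i.
        return b * math.log(sum(math.exp(qi / b) for qi in q))

    def lmsr_prices(q, b=100.0):
        # Instantaneous price of each outcome, interpretable as the
        # market's current probability estimate for that outcome.
        exps = [math.exp(qi / b) for qi in q]
        total = sum(exps)
        return [e / total for e in exps]

    def trade_cost(q, outcome, shares, b=100.0):
        # What a trader pays to buy `shares` of `outcome`: C(q') - C(q).
        q_after = list(q)
        q_after[outcome] += shares
        return lmsr_cost(q_after, b) - lmsr_cost(q, b)

    # A fresh two-outcome market, e.g. "Will monthly revenue beat projection?"
    q = [0.0, 0.0]
    print(lmsr_prices(q))        # [0.5, 0.5]: no information yet
    print(trade_cost(q, 0, 50))  # buying "yes" shares pushes the "yes" price up

One property worth noting: the larger b is, the more shares it takes to move the price, which is one way such software compensates for thin participation like ours.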

The biggest problem we've had is with participation. There is a core but small group of people who participate regularly, while most of the company hasn't even asked for an account to access the software. When I first suggested this venture I was skeptical that it would work at such a small company (just under 200 staff) primarily because of this problem. From the research I saw, other companies using prediction markets only had a small percentage of employees participate as well. However, those companies were much larger than Railinc, so the total number participating was much greater.

Another problem that is related to participation is the number of questions being asked. Since we officially started this venture I've proposed all but one of the questions/markets. While I know a lot about the company, I don't know everything that is needed to make important business decisions. Which brings up another problem - in such a small company do you really need such a unique mechanism to gather actionable information from such a limited collective?

Even considering these problems, we venture forward and look for ways to make prediction markets relevant at Railinc. One way to do this is through a contest. Starting on September 1 we will have a contest to determine the best predictor. At the Railinc holiday party in December we will give an award to the person with the largest portfolio as calculated by Inkling. (The award will be similar to door prizes we've given out at past holiday parties.) I've spent some time recently with the CIO of Railinc discussing possible questions we can ask during this contest. We came up with several categories of questions, including financial, headcount, project statistics, and sales. While I am still somewhat skeptical, we will see how it plays out.

We are also looking to work with industry economists to see if Railinc could possibly host an industry prediction market. This area could be a bit more interesting, in part, because of the potential size of the population. If we can get just a small percentage of the rail industry participating in prediction markets we could tap into a sizable collective.

Over the coming months we'll learn a lot about the viability of prediction markets at Railinc. Even if the venture fails internally, my hope is to make some progress with the rail industry.

Thursday, August 12, 2010

Geospatial Analytics using Teradata: Part II - Railinc Source Systems

[This is Part II in a series of posts on Railinc's venture into geospatial analytics. See Part I.]

Before getting into the details of the various analytics that Railinc is working on, I need to explain the source data behind them. Railinc is basically a large data store of information received from various parties in the North American rail industry. We process, distribute, and store large volumes of data on a daily basis: roughly 3 million messages are received from the industry each day, which can translate into 9 million records to process. The data is categorized in four ways:
  • Asset - rail cars and all attributes for those rail cars
  • Asset health - damage and repair information for assets
  • Movement - location and logistic information for assets
  • Industry reference - supporting data for assets including stations, commodities, routes, etc.
Assets (rail cars) are at the center of almost all of Railinc's applications. We keep the inventory of the nearly 2 million rail cars in North America. For the most part, the data we receive either has an asset component or in some way supports asset-based applications. The analytics that we are currently creating from this data fall into three main categories: 1) logistics, 2) management/utilization, and 3) health.
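
To make those categories concrete, here is a rough sketch of the kinds of records involved; the type and field names are illustrative guesses on my part, not Railinc's actual message formats.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Asset:                 # Asset: a rail car and its attributes
        car_id: str
        car_type: str
        owner: str

    @dataclass
    class MovementEvent:         # Movement: location/logistics reporting
        car_id: str
        station: str             # industry reference data: station codes
        event_code: str          # e.g., arrival, departure, load, unload
        loaded: bool
        reported_at: datetime

    @dataclass
    class RepairRecord:          # Asset health: damage and repair history
        car_id: str
        component: str
        defect: str
        repaired_at: datetime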

Logistics is an easy one because movement information encompasses the bulk of the data we receive on a daily basis. If there is a question about the last reported location of a rail car, we can answer it. The key there is “last reported location.” Currently we receive notifications from the industry whenever a predefined event occurs. These events tend to occur at particular locations (e.g., stations). In between those locations is a black hole for us. At least for now, that is. More and more rail cars are being equipped with GPS devices that can pinpoint a car's exact location at any point in time. We are now working with the industry to start receiving such data to fill in that black hole.
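
The “last reported location” logic itself is simple: it is the most recent event seen for each car. In production this would presumably be a query against our Teradata store; the Python below (building on the hypothetical MovementEvent above) just illustrates the idea.

    def last_reported_location(events):
        # Map each car to the station and time of its most recent event.
        # Between reported events the car's true position is unknown,
        # the "black hole" described above.
        latest = {}
        for ev in sorted(events, key=lambda e: e.reported_at):
            latest[ev.car_id] = (ev.station, ev.reported_at)
        return latest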

Management/utilization requires more information than just location, however. If a car is currently moving and is loaded with a commodity then it is making money for its owner; if it is sitting empty somewhere then it is not. Using information provided by Railinc, car owners and the industry as a whole can get a better view into how fleets of cars are being used.
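
As a crude illustration of a utilization metric (my own simplification, not Railinc's actual method), one could measure the fraction of time a car spends loaded between reported events:

    def loaded_fraction(events):
        # Fraction of elapsed time (between consecutive reports) that a
        # car spent loaded, a rough proxy for how much it was earning.
        events = sorted(events, key=lambda e: e.reported_at)
        loaded = total = 0.0
        for prev, curr in zip(events, events[1:]):
            span = (curr.reported_at - prev.reported_at).total_seconds()
            total += span
            if prev.loaded:      # assume state holds until the next report
                loaded += span
        return loaded / total if total else 0.0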

Finally, asset health analytics provide another dimension to the view of the North American fleet. Railinc, through its sister organization TTCI, has access to events recorded by track-side detectors. These devices can detect, among other things, wheel problems at speed. TTCI performs some initial analysis on these events before forwarding them on to Railinc, which then creates alert messages that are sent to subscribers. Railinc will also collect data on repairs that are performed on rail cars. With a history of such events we can perform degradation analytics to help the industry better understand the life cycle of assets and asset components.

Railinc is unique in the rail industry in that it serves as a central store of industry data. We are just now starting to tap into that data to get a unique view of the industry. Future posts will examine some of these efforts.