Tuesday, April 15, 2014

Constitutional Textualism

The following is an essay submitted for an assignment in the Coursera Constitutional Law class taught by Professor Akhil Reed Amar of Yale University.


Some constitutional scholars, self-described textualists, argue that the only source of meaning in constitutional law should be the text of the Constitution itself. While fidelity to the text is important, there are instances that require us to look beyond strict textualism.


We can take the example of the Vice President and the question of who shall preside over his impeachment trial. A strict reading of the Constitution would indicate that the Vice President himself would preside. After all, he is the presiding officer of the Senate (where impeachment trials are held), and the text of the Constitution provides an exception to this rule only when the President is tried. Is it absurd to think that we would allow a man to preside over his own trial? It would seem so, but only by looking outside of the literal text can that absurdity be understood.

To look outside, however, we must first start by looking at the document itself more holistically. For instance, the preamble states that the Constitution was ordained, in part, to “establish Justice.” Is it just for a man to preside over his own trial? James Madison in Federalist #10 points out that “no man is allowed to be a judge in his own cause, because his interest would certainly bias his judgment, and, not improbably, corrupt his integrity.” So, we can identify an obvious error in the document only by understanding the legal culture in which it was drafted.

To rectify this particular error, however, we can look to the text itself, but only by “reading between the lines,” as Professor Akhil Reed Amar would put it. Specifically, the Senate, with the power to “determine the Rules of its Proceedings,” could easily appoint some other party to preside over the Vice President’s impeachment trial.

Other than addressing obvious holes in the document, there are specific areas in the text that require interpretation beyond the literal reading. For instance, the Ninth Amendment states that “the enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.” This implies that the people have rights prior to the establishment of the Constitution. But how are we to determine these rights?

To address this problem, Professor Amar suggests that we take into account how Americans have “lived their lives in ordinary ways.” Put more broadly, we must look at the “lived Constitution.”

An example of the lived Constitution is the emergent norm of defendants testifying on their own behalf at their trial – something that was not allowed at the time the Constitution was written. Over time, however, this practice has become so commonplace that to strike it down as being unconstitutional would seem unjust.

We also have to look to a lived Constitution if we expect to address legal questions that involve modern technology. For instance, is a person’s cellphone protected from unreasonable search under the Fourth Amendment? (See U.S. v. Wurie.) Such devices today can contain a wealth of information about our lives, including financial information, personal contacts, and schedules. It would seem that they would be protected as personal papers and effects.

However, in Chimel v. California (1969) the Supreme Court ruled that, incident to an arrest, police may search the arrestee and the area within his immediate control. Does this include a cellphone the arrestee may be carrying? If so, how much of the cellphone can be searched before such a search is considered unreasonable? Such questions need to be answered by interpreting not only the textual but also the lived Constitution.


An argument could be made that the issues raised here should be addressed through the Constitution's amendment process, which, by its very nature, would be a more textualist approach. However, such an approach would, as John Marshall put it, “partake of the prolixity of a legal code,” and turn the Constitution into a document that “could scarcely be embraced by the human mind.” A better approach would seem to be to apply various principles and emergent norms to a more consistent, terse text. As Professor Amar would say, these principles and norms would help supplement, not supplant, the text of the Constitution.

Monday, March 03, 2014

The Democratic Nature of the U.S. Constitution

The following is an essay submitted for an assignment in the Coursera Constitutional Law class taught by Professor Akhil Reed Amar of Yale University.


The U.S. Constitution opens with the statement “We the People,” announcing the most “democratic deed” in history. The democratic nature of the document is embodied in several aspects, the first of which was the ratification process.


While the document itself required only that ratification be carried out by conventions in nine of the states, 8 of the 13 states went further, lowering property restrictions to allow more people to vote for ratification convention delegates. This allowed for broader participation amongst the people. No other great democracy in history had allowed such broad participation in its constitution's ratification process. For instance, the Articles of Confederation, the governing document of the states prior to the Constitution, was sent out to be ratified strictly by the state legislatures - no special consideration was given to the citizens of the states. Also, the English constitution, such as it was, was never reduced to a single document on which the British people could vote. Only Massachusetts and New Hampshire had put their state constitutions to a vote by the people of the various townships (examples which set the stage for the U.S. Constitution).


There are also various provisions within the document itself that demonstrate democratic values. For instance, members of one house of the bicameral legislature, the House of Representatives, are elected biennially “by the People of the several States.” The Articles of Confederation, by contrast, had only one house and its members were chosen by the various state legislatures (except for Connecticut and Rhode Island where voters could weigh in on the selection of delegates).


The Constitution also requires the House of Representatives to change in size (initially) and in apportionment as the population grows. This means that the lower house continually reflects the changing demographic shape of the people. This was not, however, a feature of the Articles of Confederation, which limited states to between 2 and 7 delegates in the Confederation Congress, the actual number being determined by each state legislature, not by the size of the state's population. (The restriction on the ability of colonial assemblies to adjust in size based upon population shifts was actually one of the grievances against the English Crown listed in the Declaration of Independence.)
Other provisions of the Constitution that reflect its democratic nature concern qualifications for office. For one, no person can be excluded from office based upon his or her religious beliefs. As stated in Article VI, “no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States.” In 1787, many (if not all) state governments required office holders to be Christians.


The Constitution does, however, put age restrictions on those seeking to hold office. House members must be at least 25 years old, senators must be at least 30, and the president must be at least 35. While this requirement seems to restrict participation, it can be viewed as an egalitarian feature.


In many Old World societies the young scions of aristocracy would have an advantage from an early age because of their family heritage. Putting a minimum age requirement on office seekers levels the playing field for those without such a hereditary advantage by giving them time to make their own mark in society.


One final democratic feature of the Constitution is actually something that does not appear in the document itself. That is, there are no property qualifications for office holders. Men of little or no property could now hold any office within the government, something that was uncommon at the time of the Constitution’s ratification. For instance, the English House of Commons was made up of men of vast estates, and even the old Congress under the Articles of Confederation had members whose states imposed property qualifications on delegate selection.


While the Constitution’s democratic nature was unique at the time of its ratification, it must not be forgotten that many in society were still excluded from participation in the electoral and governing processes. Over time, however, the document has been expanded to better reflect the notion of “We the People.”

Monday, August 30, 2010

NFJS 2010 in Raleigh

I attended the No Fluff Just Stuff tour in Raleigh this past weekend with a bunch of others from Railinc. After the event on Sunday I tweeted that I wasn’t all that impressed with this year’s sessions. Matthew McCullough responded asking for some details on my concerns. Instead of being terse in a tweet, I thought I’d be fair with a more lengthy response.

First off, I really wasn’t planning on going this year. I took a look at the sessions and didn’t see enough relevant content that interested me. In 2007 and 2008 I went with a group from Railinc and had a pretty good time while learning about some new things that were going on in the industry. (We didn’t go in 2009 for economic reasons.) This year, however, I felt that there wasn’t enough new content on the schedule that interested me. (I have seen/read/heard enough about REST, Grails, JRuby, Scala, etc.).

What changed my mind about going was the interest expressed by some other developers at Railinc. Since I coordinated the 2007 and 2008 trips, I thought I’d get this one coordinated, and since there was a good amount of interest, I figured I’d give it a shot as well. So, to be fair, I wasn’t going in expecting much anyway.

Here were the key issues for me:
  1. Some of the sessions did not go in the direction that I expected. To be fair, though, I was warned ahead of time to review the slides before making a decision on a session. The problem here is that some presenters relied more on demos and less on slides, so in some cases it was hard to judge by just the slide deck.
  2. Like I said above, I wasn’t planning on going in the first place because of the dearth of sessions that seemed interesting to me. I ended up attending some sessions simply because they were the least irrelevant option at the time. There were actually two sessions that I bailed on in the middle because I wasn’t getting any value from them.
  3. Finally, and this is completely subjective, some of the speakers just didn't do it for me. While you could tell that most (if not all) of the speakers were passionate about what they were talking about, some were just annoying about it. For instance, some of the attendees I spoke to felt that the git snobbery was a bit of overkill. Some of it was just speaker style - some click with me, some don't.
Some things I heard from the other Railinc attendees were:
  • Too much duplication across speakers
  • Not enough detail along tracks
  • Some of the sessions were too introductory - could have gotten the same information from a bit of googling.
Granted, some of my concerns are subjective and specific to my own oddities. But I do remember that I had enjoyed the '07 and '08 events much more.

I did, however, enjoy Matthew's first session on Hadoop. I knew very little about the technology going in and Matthew helped crystallize some things for me. I also got some good information from Neal Ford's talks on Agile engineering practices and testing the entire stack.

I really like the No Fluff Just Stuff concept in general. I think it is an important event in the technology industry. The speakers are knowledgeable and passionate which is great to see. My mind is still open about going next year, but it will be a harder sell.

Wednesday, August 25, 2010

Not so Stimulating

I sent the following to the Raleigh News & Observer:
E. Wayne Stewart says that “enormous fiscal stimulus ... to finance World War II led the U.S. out of the Depression.” While it is true that aggregate economic indicators (e.g., unemployment and GDP) improved during the war, it was not a time of economic prosperity.

During World War II the U.S. produced a lot of war material, not consumer goods. It was a time when citizens went without many goods and raw materials due to war-time rationing. It was also a time when wages and prices were set by government planning boards. In short, it was a time of economic privation for the general public. It wasn't until after the war, when spending was drastically reduced, that the economy returned to a sense of normalcy.

The lesson we should learn is that, yes, it is possible for government to spend enough money to improve aggregate economic indicators. That same spending, however, can distort the fundamentals of the economic structure in ways that are not wealth-producing as determined by consumer preferences.
This argument, that government spending during WWII got us out of the Depression, is used by many to justify economic stimulus. The argument I use above comes from Robert Higgs and his analysis of the economy during the Depression and WWII.

For me, though, the biggest problem with the "just spend" argument is that it ignores the nuances and subtleties of a market-based, consumer-driven economy. It is like saying that to get from a 1000-word essay to a 2000-word essay all you need to do is add 1000 words. No thought is given to the fact that those extra words need to fit into the overall essay in a coherent manner. A productive economy needs spending to occur in the proper places at the proper times, and it is the market process that does this most efficiently (not completely efficiently, but better than the alternatives).

Prediction Markets at a Small Company

Railinc has recently started a prediction market venture using Inkling software. We have been using it internally to predict various events including monthly revenue projections and rail industry traffic volume. In July, we also had markets to predict World Cup results. While this experience has been fun and interesting, I can't claim it has been a success.

The biggest problem we've had is with participation. There is a core but small group of people who participate regularly, while most of the company hasn't even asked for an account to access the software. When I first suggested this venture I was skeptical that it would work at such a small company (just under 200 staff) primarily because of this problem. From the research I saw, other companies using prediction markets only had a small percentage of employees participate as well. However, those companies were much larger than Railinc, so the total number participating was much greater.

Another problem that is related to participation is the number of questions being asked. Since we officially started this venture I've proposed all but one of the questions/markets. While I know a lot about the company, I don't know everything that is needed to make important business decisions. Which brings up another problem - in such a small company do you really need such a unique mechanism to gather actionable information from such a limited collective?

Even considering these problems, we venture forward and look for ways to make prediction markets relevant at Railinc. One way to do this is through a contest. Starting on September 1 we will have a contest to determine the best predictor. At the Railinc holiday party in December we will give an award to the person with the largest portfolio as calculated by Inkling. (The award will be similar to door prizes we've given out at past holiday parties.) I've spent some time recently with the CIO of Railinc discussing possible questions we can ask during this contest. We came up with several categories of questions including financial, headcount, project statistics, and sales. While I am still somewhat skeptical, we will see how it plays out.
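As a toy illustration of the contest's ranking step, here is a sketch in Python. Inkling calculates portfolio values itself; the CSV export and its column names below are hypothetical stand-ins, not Inkling's actual output.

    # Rank contest participants by portfolio value. The file and its
    # "participant" / "portfolio_value" columns are assumed for the sketch.
    import csv

    def top_predictors(csv_path, n=3):
        """Return the n participants with the largest portfolios."""
        with open(csv_path, newline="") as f:
            rows = [(row["participant"], float(row["portfolio_value"]))
                    for row in csv.DictReader(f)]
        return sorted(rows, key=lambda r: r[1], reverse=True)[:n]

    # Example: announce the winner at the holiday party.
    # for name, value in top_predictors("portfolios.csv", n=1):
    #     print(name, value)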

We are also looking to work with industry economists to see if Railinc could possibly host an industry prediction market. This area could be a bit more interesting, in part, because of the potential size of the population. If we can get just a small percentage of the rail industry participating in prediction markets we could tap into a sizable collective.

Over the coming months we'll learn a lot about the viability of prediction markets at Railinc. Even if the venture fails internally, my hope is to make some progress with the rail industry.

Thursday, August 12, 2010

Geospatial Analytics using Teradata: Part II - Railinc Source Systems

[This is Part II in a series of posts on Railinc's venture into geospatial analytics. See Part I.]

Before getting into details of the various analytics that Railinc is working on, I need to explain the source data behind these analytics. Railinc is basically a large data store of information that is received from various parties in the North American rail industry. We process, distribute, and store large volumes of data on a daily basis: roughly 3 million messages are received from the industry each day, which can translate into 9 million records to process. The data is categorized in four ways (a minimal sketch in code follows the list):
  • Asset - rail cars and all attributes for those rail cars
  • Asset health - damage and repair information for assets
  • Movement - location and logistic information for assets
  • Industry reference - supporting data for assets including stations, commodities, routes, etc.
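To make these categories concrete, here is a minimal sketch of them as Python types. The names and fields are illustrative assumptions, not Railinc's actual schemas.

    # Hypothetical types for the four data categories described above.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Asset:                  # rail cars and their attributes
        car_id: str
        car_type: str
        owner: str

    @dataclass
    class AssetHealthEvent:       # damage and repair information for assets
        car_id: str
        reported: datetime
        description: str

    @dataclass
    class MovementEvent:          # location and logistic information
        car_id: str
        station: str
        event_time: datetime
        event_type: str           # e.g., arrival, departure, interchange

    @dataclass
    class IndustryReference:      # stations, commodities, routes, etc.
        code: str
        kind: str                 # e.g., "station", "commodity", "route"
        name: str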
Assets (rail cars) are at the center of almost all of Railinc's applications. We keep the inventory of the nearly 2 million rail cars in North America. For the most part, the data we receive either has an asset component or in some way supports asset-based applications. The analytics that we are currently creating from this data fall into three main categories: 1) logistics, 2) management/utilization, and 3) health.

Logistics is an easy one because movement information encompasses the bulk of the data we receive on a daily basis. If there is a question about the last reported location of a rail car, we can answer it. The key there is "last reported location." Currently we receive notifications from the industry whenever a predefined event occurs. These events tend to occur at particular locations (e.g., stations). In between those locations is a black hole for us. At least for now, that is. More and more rail cars are being equipped with GPS devices that can pinpoint a car's exact location at any point in time. We are now working with the industry to start receiving such data to fill in that black hole.
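As a toy sketch of "last reported location" (including the GPS fill-in), the structures below simply keep the newest report per car, whatever its source; they are stand-ins for the real feeds, not Railinc's pipeline.

    # Keep only the most recent report per car, from either feed.
    from datetime import datetime

    last_event = {}  # car_id -> (time, station) from industry event messages
    last_gps = {}    # car_id -> (time, (lat, lon)) from GPS-equipped cars

    def record_event(car_id, when, station):
        if car_id not in last_event or when > last_event[car_id][0]:
            last_event[car_id] = (when, station)

    def record_gps(car_id, when, lat, lon):
        if car_id not in last_gps or when > last_gps[car_id][0]:
            last_gps[car_id] = (when, (lat, lon))

    def last_reported_location(car_id):
        """Prefer whichever report is newer: a station event or a GPS fix."""
        candidates = [d[car_id] for d in (last_event, last_gps) if car_id in d]
        return max(candidates, key=lambda c: c[0]) if candidates else None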

Management/utilization requires more information than just location, however. If a car is currently moving and is loaded with a commodity then it is making money for its owner; if it is sitting empty somewhere then it is not. Using information provided by Railinc, car owners and the industry as a whole can get a better view into how fleets of cars are being used.

Finally, asset health analytics provide another dimension to the view of the North American fleet. Railinc, through its sister organization TTCI, has access to events recorded by track-side detectors. These devices can detect, among other things, wheel problems at speed. TTCI performs some initial analysis on these events before forwarding them on to Railinc, which then creates alert messages that are sent to subscribers. Railinc will also collect data on repairs that are performed on rail cars. With a history of such events we can perform degradation analytics to help the industry better understand the life-cycle of assets and asset components.
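As a hedged sketch of the alerting step only, the function below turns a detector reading that crosses a threshold into a message for subscribers; the threshold value, field names, and message format are assumptions for illustration.

    # Turn a track-side detector reading into an alert, if warranted.
    WHEEL_IMPACT_THRESHOLD_KIPS = 90.0  # assumed threshold for the sketch

    def make_alert(car_id, detector_id, impact_kips):
        """Return an alert message if the reading warrants one, else None."""
        if impact_kips < WHEEL_IMPACT_THRESHOLD_KIPS:
            return None
        return (f"ALERT car={car_id} detector={detector_id}: "
                f"wheel impact {impact_kips:.1f} kips exceeds "
                f"{WHEEL_IMPACT_THRESHOLD_KIPS:.0f} kips")

    print(make_alert("RAIL123456", "D-042", 104.3))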

Railinc is unique in the rail industry in that it can be viewed as a data store of various information. We are just now starting to tap into this data to get a unique view into the industry. Future posts will examine some of these efforts.

Tuesday, July 06, 2010

The Science and Art of Prediction Markets

What constitutes a good question for a prediction market? Obviously, for the question to be valuable, the answer should provide information that was not available when the question was originally asked. Otherwise, why ask the question? Value, however, is only one aspect of a good question. For prediction markets to function in a useful manner, the questions that are asked must also be constructed properly. There is both a science and an art to this process.

The Science

There are three criteria to keep in mind when constructing a question for a prediction market:
  • The correct answer must be concrete
  • Answers must be determined on specific dates
  • Information about possible answers can be acquired before the settlement date
Concreteness is important because it settles the question being asked - the result is not open to interpretation. An example of a question with a vague answer would be "What policy should the U.S. government enact to encourage economic growth? A) Subsidizing green energy, B) free trade, C) fiscal austerity, D) health care reform." One problem here is that the time frame to accurately answer this question could be extensive. Also, the complexities of economic growth make it difficult to tease out the individual variables that would be necessary to concretely answer the question. If two or more answers are correct (whatever that may mean), then the market may end up reflecting the value judgments of the participants, not objective knowledge. This type of question is better suited to a poll than to a prediction market.

Not only should answers be concrete, there should be some point in time when each answer can be determined to either have occurred or not have occurred. A question that never gets resolved can hamper the prediction process by reducing the incentive to invest in that market. (Can a non-expiring question be valuable? Could the ongoing process of information discovery be useful? Questions to ponder.)

This doesn't mean, however, that every answer must be determined on the same date. Wrong answers can be closed as the process unfolds. Once the correct answer is determined, however, the market should be closed. For example, take the question "Which candidate will win the 2012 Republican Party nomination for U.S. President?" If this question is asked in January of 2012 there could be several possible answers (one for each candidate). As the year progresses to the Republican Party convention, several candidates will drop out of the election. The prediction market would then close out those answers (candidates) but stay open for the remaining answers. Weeding out wrong answers over time is part of the discovery process.
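A minimal sketch of this weeding-out mechanic, ignoring pricing entirely (Inkling's actual market maker is not modeled here):

    # Outcomes are closed one by one as candidates drop out; the market
    # stays open on the remainder until the correct answer settles it.
    class Market:
        def __init__(self, question, outcomes):
            self.question = question
            self.open_outcomes = set(outcomes)

        def close_outcome(self, outcome):
            """Rule out one wrong answer; trading continues on the rest."""
            self.open_outcomes.discard(outcome)

        def settle(self, winner):
            """The correct answer is known; close the whole market."""
            assert winner in self.open_outcomes
            self.open_outcomes = set()

    m = Market("Who wins the 2012 GOP nomination?", ["A", "B", "C"])
    m.close_outcome("C")  # candidate C drops out of the race
    m.settle("A")         # the convention settles the market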

The final criterion - the ability to acquire information before the settlement date - is what separates prediction markets from strict gambling. If all participants are in the dark about a question until that question is settled, then there is little value in asking the question. Prediction markets are powerful because they allow participants to impart some knowledge into the process over a period of time. The resulting market prices can then provide information that can be acted upon throughout the process. If participants cannot acquire useful information to incorporate into the market, then market activity is nothing more than playing roulette, where all answers are equally possible until the correct answer is determined.

A good example to illustrate the above criteria is a customer satisfaction survey. Railinc uses a semiannual (twice a year) survey to gauge customer sentiment on a list of products. For each product, customers are asked a series of questions whose answers range from 1 (disagree) to 5 (agree). The answers are then averaged into a final score for each product ranging from 1 to 5 (the goal is to get as close to 5 as possible).

The following market could be set up for Railinc employees (a settlement sketch in code follows the answer list):
What will the Fourth Quarter 2010 customer satisfaction score be for product X?
  • Less than or equal to 4.0
  • Between 4.1 and 4.4 (inclusive)
  • Greater than or equal to 4.5
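Here is a small sketch of how such a market would settle; the score computation and the ranges mirror the survey and the answers above, while the sample answers themselves are hypothetical.

    # Average the 1-5 survey answers, then map the score to an answer range.
    def product_score(answers):
        """Average a product's 1-5 survey answers, rounded to one decimal."""
        return round(sum(answers) / len(answers), 1)

    def winning_outcome(score):
        if score <= 4.0:
            return "Less than or equal to 4.0"
        if score <= 4.4:
            return "Between 4.1 and 4.4 (inclusive)"
        return "Greater than or equal to 4.5"

    # Example: a product averaging 4.2 settles on the middle answer.
    print(winning_outcome(product_score([4, 4, 5, 4, 4])))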
The value of this market is that Railinc management and product owners may get some insight into what employees are hearing from customers. Customer Service personnel could have one view based upon their interactions with customers, while developers may have a different view. Over time, management and product owners could take actions based upon market movements.

As far as concreteness is concerned, the final answer for this question will be determined when the survey is completed (e.g., January 2011), and it will be a specific number that falls into one of the ranges given by the answers.

This market also satisfies the last criterion regarding the ability to acquire information before the market is settled. This is important because this is where the value of the market is realized. As Railinc employees (i.e., market participants) gain knowledge over time, they can incorporate that knowledge into the market via the buying and selling of shares in the provided answers.

The Art

In the example given above regarding the customer satisfaction survey, the answers provided were not arbitrary - they were selected to maximize the value of the market. This is where the art of prediction markets is applied.

If the possible scores for a customer survey run from 1 to 5, why not provide five separate answers (1-1.9, 2-2.9, 3-3.9, 4-4.9, 5)? Why not have two possible answers (below 2.5 and above 2.5)? The selection of possible answers is partially determined by what is already known about the result. In the case of the survey, past results may have shown that this particular product has averaged 4.1. It is highly unlikely that the survey results will drop to the 1-1.9 range. Providing such an answer would not be valuable because market participants would almost immediately short that position. This is still information, but it is information that is already known. What is desired is insight into what is not known. The answers provided in the above example will give some insight into whether the product is continuing to improve or whether it is regressing.
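As a sketch of this selection step under stated assumptions (a history of past scores and a chosen bucket width), the ranges below are centered on the historical mean so that no single answer is a near-certainty:

    # Center three answer ranges on a product's historical average score.
    def pick_ranges(history, width=0.4):
        """Return three bucket labels around the historical mean."""
        mean = sum(history) / len(history)
        low = round(mean - width / 2, 1)
        high = round(mean + width / 2, 1)
        return (f"<= {low}",
                f"{round(low + 0.1, 1)} to {high} (inclusive)",
                f">= {round(high + 0.1, 1)}")

    # Example: a product that has averaged around 4.1 in past surveys.
    print(pick_ranges([4.1, 4.2, 4.1]))
    # -> ('<= 3.9', '4.0 to 4.3 (inclusive)', '>= 4.4')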

So, the selection of possible answers to market questions must take into account what is already known as well as what is unknown. What do you know about what you don't know?

Conclusion

Good questions make good prediction markets. Constructed properly, these questions can be a valuable tool in the decision making process of an organization.