Monday, July 20, 2009

Logical Reasoning Week - Question Stem vs Task Standards

Welcome to Logical Reasoning Week!

Monday through Thursday we'll be discussing the oft-misunderstood and -maligned section of the LSAT. Later posts will cover how the section has changed since the late 1990s, a graph of the most common LSAT question types (or task standards, as we call them at Zen), and how to correct mistakes made on the most important of the 22 task standards.

First off, a succinct definition from Wikipedia, edited to fit within Zen philosophy:

The [LSAT] contains two logical reasoning sections [of 24-26 questions each], commonly known as "arguments" or "LR". Each question begins with a [stimulus] that presents either an argument or a short set of facts. The [stimulus] is followed by a [question stem] asking the test taker to [analyze the logical structure, evaluate the use of evidence, find similarly structured logic, support the argument with principles, extrapolate additional conclusions, identify assumptions made, or analyze evidence from multiple viewpoints]. Most [stimuli] are followed by a single [question stem], although a few are followed by two.

While the analytical reasoning games receive much more ballyhoo from test-takers and test-preppers alike, logical reasoning questions account for more than double the points on a modern LSAT! As an example of the disparity in coverage, "LSAT analytical reasoning" and "LSAT logic games" collectively return almost 200,000 hits on Google, while "LSAT logical reasoning" returns fewer than 30,000. That's nearly seven times the coverage for a section worth half the points; compound the two disparities and importance is misrepresented by a factor of almost 14!
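If you'd like to sanity-check that arithmetic, here's a quick sketch in Python (the hit counts are the approximate figures quoted above, not live search data):

```python
# Approximate Google hit counts quoted above (mid-2009 figures).
games_hits = 200_000  # "LSAT analytical reasoning" + "LSAT logic games" combined
lr_hits = 30_000      # "LSAT logical reasoning"

coverage_ratio = games_hits / lr_hits  # games get ~6.7x the coverage
points_ratio = 2                       # logical reasoning is worth ~2x the points

# The two disparities compound: a section worth twice the points
# receives roughly a seventh of the attention.
misrepresentation = coverage_ratio * points_ratio
print("Misrepresentation factor: %.1f" % misrepresentation)  # ~13.3
```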

Media saturation for the diagramming section is so pronounced that many of our Zen students mistake "logical reasoning" for "logic games"; one could argue that the LSAC has brand recognition issues. It could also be that logic games have historically been considered much more responsive to study and, more importantly, to tutoring. At Zen, we posit that test prep companies and tutors don't know how to categorize, track, and teach the logical reasoning sections, so they focus their efforts where they know they can achieve results.

This is where standards-based instruction, drawn from our experience at Teach For America, comes in. It's worth noting that the PowerScore line of books attempts to provide such standards by breaking logical reasoning down into 13 different types of questions based on the relationship between stimulus, stem, and answer; however, we feel that system both overcomplicates the situation and misrepresents important facets of the logical reasoning sections. Zen students break down the section not by stimulus or question stem, but by task.

That is, the PowerScore method is merely nomenclature, while the Zen method is outcome driven. Think of the difference as that between telling someone "Driving" and telling them "Drive to the mall and park in front of Best Buy." The former is a gerund posing as a task, whereas Zen provides the imperative phrase - the task - that must be performed in order to receive credit for an answer.

After analyzing over 1,000 individual logical reasoning questions (from TestPreps 19-38 and the three most recent June LSATs), we broke the questions down into 22 discrete action-verb statements, or "task standards," which were then grouped with related tasks into seven overarching "strands." Our Zen students then input their test performance into an Excel spreadsheet tracker so they can easily see which of the task standards they are performing incorrectly. Because the task standards are based not on arbitrary categories but on desired performance, it is the difference between saying, "This is an assumption question" and "I need to find the gap before the conclusion."

Our task standards allow us to clearly delineate, for instance, between "Find the error the author made in logic" and "Find the evidence the author should have cited," even though both tasks are prompted by the same question-stem wording: some variation on "Which flaw did the author make?" As you can see, conflating these two tasks is both easy and damaging; without task standards, do you look for an error in logic or evaluate potential evidence?
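To make that ambiguity concrete, imagine encoding the stem-to-task relationship directly (a deliberately tiny, hypothetical mapping, not our full system):

```python
# One stem wording can legitimately demand more than one task.
stem_to_tasks = {
    "Which flaw did the author make?": [
        "Find the error the author made in logic",
        "Find the evidence the author should have cited",
    ],
}

candidates = stem_to_tasks["Which flaw did the author make?"]
if len(candidates) > 1:
    # The stem alone underdetermines the task; only the stimulus decides.
    print("Stem is ambiguous between:", *candidates, sep="\n  ")
```

The lookup returns two candidate tasks, which is exactly the problem: the stem can't be the unit of classification.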

Students who do not use some form of task standards have serious issues with scope on the LSAT. They aren't given the skills to recognize when to consider outside information and when to focus solely within the confines of the text, since reading the question stem alone is insufficient to determine this. While PowerScore recognizes the crucial interplay between stimulus, stem, and correct answer, their system is not as finely tuned: its 13 question types cover only 59% of the 22 discrete tasks we have identified.

Tomorrow, look for a further breakdown of task strands, and specifically how the density of tasks has changed since the late 1990s.