«Big Data» and «Little Data»

First Paul Krugman, then Mark Thoma:

Big Data?:

Data, Stimulus, and Human Nature, by Paul Krugman:

David Brooks writes about the limitations of Big Data, and makes some good points. But he goes astray, I think, when he touches on a subject near and dear to my own data-driven heart:

For example, we’ve had huge debates over the best economic stimulus, with
mountains of data, and as far as I know not a single major player in this
debate has been persuaded by data to switch sides.

Actually, he’s not quite right there, as I’ll explain in a minute. But it’s
certainly true that neither stimulus advocates nor hard-line stimulus opponents
have changed their positions. The question is, does this say something about the
limits of data — or is it just a commentary on human nature, especially in a
highly politicized environment?

For the truth is that there were some clear and very different predictions
from each side of the debate… On these predictions, the data have spoken
clearly; the problem is that people don’t want to hear…, and the fact that
they don’t happen has nothing to do with the limitations of data. …

That said, if you look at players in the macro debate who would not face huge personal and/or political penalties for admitting that they were wrong, you actually do see data having a considerable impact. Most notably, the IMF has responded to the actual experience of austerity by conceding that it was probably underestimating fiscal multipliers by a factor of about 3.

So yes, it has been disappointing to see so many people sticking to their
positions on fiscal policy despite overwhelming evidence that those positions
are wrong. But the fault lies not in our data, but in ourselves.

I’ll just add that when it comes to the debate over the multiplier and the macroeconomic data used to try to settle the question, the term ‘Big Data’ doesn’t really apply. If we actually had ‘Big Data,’ we might be able to get somewhere, but as it stands — with so little data and so few relevant historical episodes with similar policies — precise answers are difficult to ascertain. And it’s even worse than that. Let me point to something David Card said in an interview I posted yesterday:

I think many people are concerned that much of the research they see is
biased and has a specific agenda in mind. Some of that concern arises
because of the open-ended nature of economic research. To get results,
people often have to make assumptions or tweak the data a little bit here or
there, and if somebody has an agenda, they can inevitably push the results
in one direction or another. Given that, I think that people have a
legitimate concern about researchers who are essentially conducting advocacy
work.

If we had the ‘Big Data’ we need to answer these questions, this would be less of a problem. But with quarterly data from 1960 (when money data starts; you can go back to 1947 otherwise), or since 1982 (to avoid big structural changes and changes in Fed operating procedures), or even monthly data (if you don’t need variables like GDP), there isn’t as much precision as needed to resolve these questions (50 years of quarterly data is only 200 observations). There is also a lot of freedom to steer the results in a particular direction, and we have to rely upon the integrity of researchers to avoid pushing a particular agenda. Most play it straight up (the answers are however they come out), but there are enough voices with agendas — particularly, though not exclusively, from think tanks — to cloud the issues and make it difficult for the public to separate the honest work from the agenda-based, one-sided, sometimes dishonest presentations. And there are also the issues noted above about people sticking to their positions, honestly in their own view even if it is the result of data-mining or of changing assumptions until the results come out ‘right,’ because the data doesn’t provide enough clarity to force them to give up their beliefs (in which they’ve invested considerable effort).
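The precision problem can be made concrete with a small simulation. This is only an illustrative sketch, not an actual multiplier estimate: the model, the noise level, and the shock process are all invented, chosen so that the policy signal is small relative to the macro noise, and the point is simply how slowly uncertainty shrinks with sample size.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplier_estimate(n, true_mult=1.5, noise_sd=4.0):
    """OLS estimate of a 'multiplier' and its standard error from n observations.

    Hypothetical setup: output growth = true_mult * spending shock + noise,
    with the noise large relative to the shock, as in macro data.
    """
    g = rng.normal(0, 1, n)                      # made-up spending shocks
    y = true_mult * g + rng.normal(0, noise_sd, n)
    beta = np.dot(g, y) / np.dot(g, g)           # OLS slope (no intercept)
    resid = y - beta * g
    se = np.sqrt(resid.var(ddof=1) / np.dot(g, g))
    return beta, se

beta_small, se_small = multiplier_estimate(200)       # ~50 years of quarterly data
beta_big, se_big = multiplier_estimate(200_000)       # a 'Big Data' fantasy

print(f"n=200:      estimate {beta_small:.2f} +/- {1.96 * se_small:.2f}")
print(f"n=200,000:  estimate {beta_big:.2f} +/- {1.96 * se_big:.2f}")
```

With 200 observations the 95% interval is wide enough to span most of the positions in the multiplier debate; with a thousand times the data it collapses to a narrow band around the true value.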

So I wish we had ‘Big Data,’ and not just a longer time series of macro data. It would also be useful to re-run the economy hundreds or thousands of times and evaluate monetary and fiscal policies across these experiments. With just one run of the economy, you can’t always be sure that the uptick you see in historical data after, say, a tax cut is from the treatment, or just randomness (or driven by something else). With many, many runs of the economy, that can be sorted out. (Cross-country comparisons can help, but the ‘all else equal’ part is never satisfied, making the comparisons suspect.)
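The "many runs of the economy" idea can be sketched as a toy Monte Carlo. Everything here is hypothetical (the policy "effect" and the noise level are made up), but it shows why a single draw cannot separate treatment from randomness while thousands of draws can:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_economy(treated, effect=0.5, noise_sd=2.0):
    """One hypothetical 'run' of the economy: growth after a policy change.

    The treatment effect is small relative to the noise, so any single
    run is nearly uninformative about whether the policy worked.
    """
    return (effect if treated else 0.0) + rng.normal(0, noise_sd)

# One run each way: the difference is mostly noise.
one_treated = run_economy(True)
one_control = run_economy(False)

# Thousands of re-runs: averaging washes the noise out.
runs = 10_000
treated = np.mean([run_economy(True) for _ in range(runs)])
control = np.mean([run_economy(False) for _ in range(runs)])

print(f"single-run difference:      {one_treated - one_control:+.2f}")
print(f"average over {runs} runs: {treated - control:+.2f}  (true effect 0.5)")
```

The single-run difference can easily come out negative even though the policy helps; the averaged difference sits close to the true effect, which is exactly the sorting-out that one historical run of the actual economy never allows.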

Despite a few research attempts such as the Billion Prices Project, ‘Little Data,’ with all the problems that come with it, is a better description of empirical macroeconomics.

(Via Economist’s View.)
