«It’s quite fashionable to say the education system is broken. It’s not broken… It’s wonderfully constructed. It’s just that we don’t need it anymore. It’s outdated»
The context is as follows: in the US, the states have a balanced-budget requirement. New York, Ohio, Nevada, and the other 40 cannot spend more than they take in (only 7 states are allowed to carry a deficit over into the next year's budget).
Few people are aware that this applies only to the operating part of the budget, not to the capital part.
The operating budget covers salaries and pensions, health programs, and transfers to municipalities.
The capital budget covers road construction, school buildings, and police and fire services.
If you treat the budget as a single unit, then, had you been a household, you would never have been allowed to buy a house or a car. And many people have trouble drawing the line between consumption and investment.
Of course, families could draw down savings to buy homes and cars. But that’s an option not available to the government because it has no savings, only a large debt. Treating it and private individuals the same way, as balanced-budget supporters propose, would require the entire national debt to be paid off and a surplus accumulated before it would be permitted to make new investments in roads, bridges, buildings and other long-lived assets.
Of course, no one actually believes that. But it follows logically from arguments one often hears about why the government should balance its cash income and outlays annually: because that is supposedly how families and the states operate. In fact, they don't.
The distinction between capital spending and consumption spending also affects the way economists interpret the rate of saving. The standard measure, produced by the Commerce Department, calculates personal income and personal outlays. The difference between these two figures is assumed to be personal saving. Thus saving is not calculated directly, but is merely a residual between income and spending.
An alternative measure of saving that treats consumer durables, like autos, as investments would raise the measured rate of saving considerably. Alternatively, one could measure saving directly from financial institutions and other sources, as the Federal Reserve does (see Page 17). This yields a much higher measure of saving. In 2011, the last full year available, the Commerce Department estimated the personal saving rate at 4.2 percent, while the Fed put it at 10.3 percent.
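Because the official rate is a residual, reclassifying durable goods changes it mechanically. A toy calculation in Python makes the point; all figures below are invented for illustration and are not actual Commerce Department or Fed numbers:

```python
# Hypothetical figures (billions of dollars), chosen only to land near
# the 4.2 percent official rate quoted above.
disposable_income = 11_800
outlays = 11_300   # includes purchases of durables
durables = 1_100   # autos, appliances, and other consumer durables

# Standard (Commerce Department style): saving is the residual
# between income and total outlays.
residual_saving_rate = (disposable_income - outlays) / disposable_income

# Alternative: treat durables purchases as investment, not consumption,
# so they no longer count against saving.
alt_saving_rate = (disposable_income - (outlays - durables)) / disposable_income

print(f"residual saving rate:        {residual_saving_rate:.1%}")  # 4.2%
print(f"durables-as-investment rate: {alt_saving_rate:.1%}")       # 13.6%
```

Every dollar of spending reclassified from consumption to investment raises measured saving one-for-one, which is why the definitional choice matters so much.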
Periodically, administrations have suggested creating a capital budget, both to give a clearer picture of the economic effects of federal spending and to shield investments from budget cuts that should be limited to consumption outlays. The Reagan administration floated the idea in 1986, and the Clinton administration created a commission to study it.
A common criticism has always been that the definition of “capital” is too slippery and could too easily become a loophole through which consumption spending could escape. The obvious answer is to assign some entity, like the Government Accountability Office, to audit investment spending and ensure that it truly represents investment and not consumption.
Many economists say they believe that the best thing the federal government can do to raise the long-term economic growth rate is increase infrastructure spending. It would have the double benefit of mobilizing idle resources, especially unemployed workers, while low interest rates permit capital projects to be financed very cheaply.
One main barrier to achieving this double benefit is the confusion between investment spending and consumption spending, which is distorted by the way the budget is presented and the way we calculate saving.
These are all important methods and concepts related to statistics that are not as well known as they should be. I hope that by giving them names, we will make the ideas more accessible to people:
Mister P: Multilevel regression and poststratification.
The Secret Weapon: Fitting a statistical model repeatedly on several different datasets and then displaying all these estimates together.
The Superplot: Line plot of estimates in an interaction, with circles showing group sizes and a line showing the regression of the aggregate averages.
The Folk Theorem: When you have computational problems, often there’s a problem with your model.
The Pinch-Hitter Syndrome: People whose job it is to do just one thing are not always so good at that one thing.
Weakly Informative Priors: What you should be doing when you think you want to use noninformative priors.
P-values and U-values: They’re different.
Conservatism: In statistics, the desire to use methods that have been used before.
WWJD: What I think of when I’m stuck on an applied statistics problem.
Theoretical and Applied Statisticians, how to tell them apart: A theoretical statistician calls the data x, an applied statistician says y.
The Fallacy of the One-Sided Bet: Pascal’s wager, lottery tickets, and the rest.
Alabama First: Howard Wainer’s term for the common error of plotting in alphabetical order rather than based on some more informative variable.
The USA Today Fallacy: Counting all states (or countries) equally, forgetting that many more people live in larger jurisdictions, and so you’re ignoring millions and millions of Californians if you give their state the same space you give Montana and Delaware.
Second-Order Availability Bias: Generalizing from correlations you see in your personal experience to correlations in the population.
The “All Else Equal” Fallacy: Assuming that everything else is held constant, even when it’s not gonna be.
The Self-Cleaning Oven: A good package should contain the means of its own testing.
The Taxonomy of Confusion: What to do when you’re stuck.
The Blessing of Dimensionality: It’s good to have more data, even if you label this additional information as “dimensions” rather than “data points.”
Scaffolding: Understanding your model by comparing it to related models.
Ockhamite Tendencies: The irritating habit of trying to get other people to use oversimplified models.
Bayesian: A statistician who uses Bayesian inference for all problems even when it is inappropriate. I am a Bayesian statistician myself.
Multiple Comparisons: Generally not an issue if you’re doing things right but can be a big problem if you sloppily model hierarchical structures non-hierarchically.
Taking a model too seriously: Really just another way of not taking it seriously at all.
God is in every leaf of every tree: No problem is too small or too trivial if we really do something about it.
As they say in the stagecoach business: Remove the padding from the seats and you get a bumpy ride.
Story Time: When the numbers are put to bed, the stories come out.
The Foxhole Fallacy: There are no X’s in foxholes (where X = people who disagree with me on some issue of faith).
The Pinocchio Principle: A model that is created solely for computational reasons can take on a life of its own.
The statistical significance filter: If an estimate is statistically significant, it’s probably an overestimate.
Arrow’s other theorem (weak form): Any result can be published no more than five times.
Arrow’s other theorem (strong form): Any result will be published five times.
The Ramanujan principle: Tables are read as crude graphs.
The paradox of philosophizing: If philosophy is outlawed, only outlaws will do philosophy.
Defaults: What statistics is the science of.
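One of these is concrete enough to sketch in code. A minimal illustration of "The Secret Weapon": fit the same simple model to several datasets and display all the estimates together. The data and model here are invented purely for illustration:

```python
# "The Secret Weapon": fit one model repeatedly on several datasets
# (here, synthetic yearly slices) and collect the estimates side by side.
import random

random.seed(1)

def fit_slope(xs, ys):
    """Ordinary least squares slope for a single dataset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

estimates = {}
for year in range(2000, 2006):
    xs = [random.uniform(0, 10) for _ in range(50)]
    ys = [2.0 * x + random.gauss(0, 3) for x in xs]  # true slope is 2.0
    estimates[year] = fit_slope(xs, ys)

# Displaying the per-year estimates together reveals stability or drift
# that a single pooled fit would hide.
for year, est in sorted(estimates.items()):
    print(year, round(est, 2))
```

In practice one would plot the estimates with uncertainty intervals rather than print them, but the idea is the same: many small fits, viewed as a whole.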
I know there are a bunch I'm forgetting; can you all refresh my memory, please? Thanks.
P.S. No, I don’t think I can ever match Stephen Senn in the definitions game.
The lead lawsuit, which the company dismissed as groundless, says the alcohol content is mislabeled on the brands Budweiser, Michelob, Michelob Ultra, Hurricane High Gravity Lager, King Cobra, Busch Ice, Natural Ice, Bud Ice, Bud Light Platinum and Bud Light Lime.
Attorneys for the plaintiffs say their lawsuit, filed in federal court in San Francisco on Friday, could affect tens of millions of consumers of products from Anheuser-Busch, the world’s largest brewer.
Josh Boxer, an attorney behind the legal challenge, acknowledged his San Rafael, California-based Mills Law Firm is not basing its claims on independent testing of Anheuser-Busch products taken from store shelves.
«We learned about the mislabelling from a number of former employees of AB (Anheuser-Busch) at breweries throughout the United States,» Boxer said. «And some high-level guys at the brewery level all told us that as a matter of AB corporate policy, these target brands are watered down.»
We all know what Homer Simpson would have said.
Michael Jordan turned 50 on February 17. It is now 10 years since he last attacked the basket in the NBA, for the Washington Wizards, but even time cannot attack Jordan. He still holds the record for the highest scoring average in a season (30.19 points per game) and the highest scoring average in the playoffs (33.45 points per game).
And by the way, he is still the best-paid player.
Believe it or not, neither LeBron James, Kobe Bryant, Kevin Garnett, Shaquille O'Neal, nor Dirk Nowitzki has been paid more in a single season.
Jordan was paid 33 million dollars to play for the Chicago Bulls in the 1997-1998 season (Chicago won the league that year). Since then no one has been paid more, while the total amount of money in the NBA has grown. Everything has grown.
The story of how money has shaped American sports begins, for basketball, as early as 1946, the same year the NBA started. Back then, no team was allowed to pay out more than 55,000 dollars in salaries. The idea was a fair distribution of players, rather than some filthy-rich uncle luring the best players to a single team.
That mentality lies behind the modern salary cap, introduced in the NBA in 1984. The limit was then 3.6 million dollars, and it has risen since. This year the cap is 58 million dollars.
The rationale is simple. By setting a cap, the smaller teams in the smaller cities get a reasonably fair chance to bid for the best players.
It would have been easier for these small teams, with their small coffers, if the salary cap were absolute. Of course, it is not.
The NBA has a soft cap. That means teams are free to go over the salary cap, but it will cost them: that is when the luxury tax kicks in.
Here is a little math. The three best-paid players on the Los Angeles Lakers in the 2012-2013 season are Kobe Bryant (27.8 million dollars), Dwight Howard (19.5 million dollars), and Pau Gasol (19 million dollars). That is 66.3 million dollars, 8.3 million over the cap. And there are still 12 players left on the payroll.
In total, the LA Lakers are paying close to 100 million dollars in player salaries in the 2012-2013 season. The luxury tax kicks in above 70 million, at 1 dollar for every dollar over that threshold. That means the LA Lakers pay 30 million dollars in luxury tax.
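The tax arithmetic described above is simple enough to write down directly. This sketch implements the dollar-for-dollar rule as stated in the text (the league's actual tax formula has had other tiers and rates over the years):

```python
# Simplified luxury tax, per the description above: one dollar of tax
# for every dollar of payroll above the tax threshold.

def luxury_tax(payroll_millions, threshold_millions=70.0, rate=1.0):
    """Tax owed (in millions): `rate` dollars per dollar above the threshold."""
    return max(0.0, payroll_millions - threshold_millions) * rate

# LA Lakers, 2012-13: roughly a $100M payroll against a $70M threshold.
print(luxury_tax(100.0))  # 30.0 (million dollars)
print(luxury_tax(60.0))   # 0.0  (under the threshold, no tax)
```

The `max(0.0, ...)` guard is what makes the cap "soft": staying under the threshold costs nothing, and the penalty scales only with the overage.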
That money goes into a pot that the NBA distributes, with the smallest markets, the ones with the weakest TV deals, getting the most.
The NBA is the most concrete example of a luxury tax being used to protect the NBA brand and to give the smaller markets a chance to become attractive and draw talent.
If you thought the LA Lakers were an extreme example, let us switch sports. Baseball has the same principle, a soft cap. And which team accounts for 91.5% of all luxury tax paid in baseball?
The New York Yankees.
The picture above can be found on page 39 of the report (pdf), and it tells the story of health records and paperwork in Malawi.
Health services improving, but key targets remain
Developments in Malawi are positive when it comes to child and maternal health and access to basic health services. Key targets for reducing child and maternal mortality and strengthening the health system have nevertheless not been met.
Guidelines not consistent with the preconditions
The guidelines for budget support should be clarified and updated so that they are consistent with the Storting's preconditions for when such support may be given.
The follow-up of whether funds are used as intended has clear weaknesses
- ensure that accounts from the recipient country's authorities and the contents of annual audit reports from external auditors are followed up
- strengthen the competence and capacity for following up public financial management
When the girlfriend is no longer profitable:
«Susan, we need to talk. I’ve been doing a lot of thinking lately. About us. I really like you, but ever since we met in that econ class in college I knew there was something missing from how I felt: quantitative reasoning. We can say we love each other all we want, but I just can’t trust it without the data. And after performing an in-depth cost-benefit analysis of our relationship, I just don’t think this is working out.
Please know that this decision was not rash. In fact, it was anything but—it was completely devoid of emotion. I just made a series of quantitative calculations, culled from available OECD data on comparable families and conservative estimates of future likelihoods. I then assigned weights to various ‘feelings’ based on importance, as judged by the relevant scholarly literature. From this, it was easy to determine that given all of the options available, the winning decision on both cost-effectiveness and comparative-effectiveness grounds was to see other people.
It’s not you, it’s me. Well, it’s not me either: it’s just common sense, given the nature of my utility function.
The calculations are fairly simple. At this point in my life, the opportunity cost of hanging out with you is fairly high. Sex with you grants me seventeen utils of pleasure, but I derive negative utils from all of the cuddling afterwards and the excessive number of buttons on your blouse that makes it very difficult to maneuver in the heat of the moment. I also lose utils when you do that weird thing with your hands that you think is affectionate but feels almost like you’re scratching me. Overall, I derive thirteen utils of pleasure on a typical Friday night with you, or fourteen if we watch The Daily Show as part of it (fifteen if they have a good guest on the show).
Meanwhile, I could be doing plenty of other things instead of spending time with you. For example, I could be drinking at the Irishman with a bunch of friends from work. I derive between 20 and 28 utils from hitting on drunk slutty girls at the bar. Since Jeff always buys most of the drinks anyways, the upfront pecuniary costs are low, and I have no potential negatives in terms of emotional investment. However, most of those girls don’t laugh at my jokes, which drives down utils gained. Thus, I could get between 14 and 21 utils from a night out at the bar.
If you’re looking for the kind of guy who’s interested in maximizing the worst-off outcome regardless of potential gains—well, I’m not that guy. All you have to do is look at the probabilities and compare the feasible range of outcomes in terms of number of units of pleasure to see that we’re going to have to call this relationship quits.
This may feel cold, but there’s nothing cold about well-reasoned analysis.
Like all humans, I know I am fallible—and since I have a natural tendency to improperly discount the future, I have made sure to accurately determine the present value of future costs and benefits. But even considering the diminishing marginal returns of hitting on the aforementioned drunk slutty girls, the numbers simply do not want us to be together.
I know this breakup might come as a bit of a shock to you, which I have also factored in. The disappointed look on your face costs me 5 utils of pleasure, but the knowledge that this is the right decision in the long-term makes up for that. Additionally, I have included in my calculations the fact that as a courtesy I will have to pay for this dinner in its entirety, which, given the gender parity we have previously expressed in our relationship, would normally cost me only half that.
I want you to know that this decision isn’t just for me—it’s for you, too. I’ve done the calculations. There are plenty of eligible bachelors out there who are probably able to more vigorously, consistently, and knowledgeably have sexual intercourse with you. While the thought of you being with someone else causes me a substantial negative utility that makes me feel as though I am going to vomit, I know that in the aggregate everyone is better off, and therefore it is the right decision for us to make.
There’s no need to try to persuade me otherwise, Susan. We just can’t let our feelings get in the way of the math.
In the meantime, I need to get back home. My utility calculations tell me that the best thing I can do right now is strip down to my boxers, microwave a quesadilla, and watch a bunch of episodes of The Wire. It might seem strange and horribly unproductive, but it’s not me—it’s just my utility function.»
HOW to define big data? At a meeting of the Organisation for Economic Co-operation and Development last week, about 150 delegates were asked to raise their hands if they had heard of the term—all had. How many felt comfortable giving a definition? Only about 10%. And these were government officials who will be called upon to devise policies on supporting or regulating big data.
The conference theme was 'knowledge-based capital'. The good news is that the wise minds at the OECD can see there is something new and important taking place with the role of data in business and society, and they want to shape the intellectual agenda. The problem is that the civil servants and academics who flock to the meetings aren't always as avant-garde in their thinking. That is not to say the event wasn't useful—it was. But there is much more to play for.
Take the session on accounting for intangibles. Thomas Günther of the Technical University of Dresden spoke about the ridiculousness of accountancy rules for things like brands, in which 'goodwill' is treated as an expense but not an asset if it is developed internally; it becomes an asset only if it is acquired through a market transaction, even though it is the same thing from one day to the next. This raises the question of how to place a value on the data that firms hold. For instance, as much as one-third of Amazon's sales are said to come from its recommendation engine: shouldn't the vast pool of data the company holds on customers be considered an asset? But the idea went unaddressed.
Then there is intellectual property. It is a major impediment for big data. For example, data-mining techniques that put tens of thousands of research papers through a computer to spot patterns that might otherwise be missed (such as a drug's side-effects being eliminated in the presence of another drug) have shown promise. But copyright law means these sorts of 'meta mining' studies require researchers to buy access to each article, just as if it were the 19th century and a pair of human eyes were to read it. Yet this and other issues weren't raised. Instead, officials from national patent offices bellyached about things like the backlog and the time it takes to examine a patent. Worthy matters, yes, but not cutting-edge ones.
Delegates thought they were talking about 'knowledge-based capital' as something with which they were familiar, but the issues were more novel than they imagined. The uses of information are different with big data than in the past. They talk about one thing, but something else is happening. They are looking at the front door while it is creeping through the side window.
The American economist Michael Mandel of the Progressive Policy Institute did an excellent job of putting a stake in the ground, defining big data as the idea that data is an economic input as well as an output (building on his paper on measuring the data-driven economy from last autumn). Jakob Haesler, the boss of Tinyclues, a Paris-based startup, raised a worry that big data may mean we lose a degree of transparency in why computers make the decisions they do. A recent article in Nature relied on a formula so complex that it couldn't be published in print. And as the data always change and algorithms adapt, it is not clear that the scientific standard of reproducibility can hold. Big data raises epistemological questions, he concluded.
Depressingly, a European delegate asked about the possibility of taxing data as a way to fill national coffers. The idea is similar to the 'bit tax' that floated around Brussels in the 1990s. A big data tax smacks of something so retrograde (czarist, even) that a nation might strangle a nascent area of economic growth by having the state enrich itself before its citizens can even reap the benefits.
The best comment of the day came from Andrew Wyckoff, the director of the division handling science and technology. The big data world relies on information, but we don't have any information about it with which to understand what is happening. 'How do we get the data?' he asked. As he explained: we have really good figures for R&D spending because companies break it out in their stock-market filings. And they do that because they get a tax credit. So what do we need to do to get the information from companies without having to pay them off?
‘To measure is to know,’ Lord Kelvin is said to have remarked. We need data about big data.
(Via Daily chart.)
First Paul Krugman, then Mark Thoma:
Data, Stimulus, and Human Nature, by Paul Krugman:
David Brooks writes about the limitations of Big Data, and makes some good points. But he goes astray, I think, when he touches on a subject near and dear to my own data-driven heart:

For example, we've had huge debates over the best economic stimulus, with mountains of data, and as far as I know not a single major player in this debate has been persuaded by data to switch sides.

Actually, he's not quite right there, as I'll explain in a minute. But it's certainly true that neither stimulus advocates nor hard-line stimulus opponents have changed their positions. The question is, does this say something about the limits of data — or is it just a commentary on human nature, especially in a highly politicized environment?

For the truth is that there were some clear and very different predictions from each side of the debate… On these predictions, the data have spoken clearly; the problem is that people don't want to hear…, and the fact that they don't happen has nothing to do with the limitations of data. …

That said, if you look at players in the macro debate who would not face huge personal and/or political penalties for admitting that they were wrong, you actually do see data having a considerable impact. Most notably, the IMF has responded to the actual experience of austerity by conceding that it was probably underestimating fiscal multipliers by a factor of about 3.

So yes, it has been disappointing to see so many people sticking to their positions on fiscal policy despite overwhelming evidence that those positions are wrong. But the fault lies not in our data, but in ourselves.
I'll just add that when it comes to the debate over the multiplier and the macroeconomic data used to try to settle the question, the term 'Big Data' doesn't really apply. If we actually had 'Big Data,' we might be able to get somewhere, but as it stands — with so little data and so few relevant historical episodes with similar policies — precise answers are difficult to ascertain.

And it's even worse than that. Let me point to something David Card said in an interview:
I think many people are concerned that much of the research they see is biased and has a specific agenda in mind. Some of that concern arises because of the open-ended nature of economic research. To get results, people often have to make assumptions or tweak the data a little bit here or there, and if somebody has an agenda, they can inevitably push the results in one direction or another. Given that, I think that people have a legitimate concern about researchers who are essentially conducting advocacy.
If we had the 'Big Data' we need to answer these questions, this would be less of a problem. But with quarterly data from 1960 (when money data start; you can go back to 1947 otherwise), or since 1982 (to avoid big structural changes and changes in Fed operating procedures), or even monthly data (if you don't need variables like GDP), there isn't as much precision as needed to resolve these questions (50 years of quarterly data is only 200 observations).

There is also a lot of freedom to steer the results in a particular direction, and we have to rely upon the integrity of researchers to avoid pushing a particular agenda. Most play it straight up (the answers are however they come out), but there are enough voices with agendas — particularly, though not exclusively, from think tanks and the like — to cloud the issues and make it difficult for the public to separate the honest work from the agenda-based, one-sided, sometimes dishonest presentations. And there are also the issues noted above about people sticking to their positions, in their view honestly, even if it is the result of data-mining, changing assumptions until the results come out 'right,' and so on, because the data don't provide enough clarity to force them to give up their beliefs (in which they've invested considerable effort).
So I wish we had 'Big Data,' and not just a longer time series of macro data: it would also be useful to re-run the economy hundreds or thousands of times and evaluate monetary and fiscal policies across these experiments. With just one run of the economy, you can't always be sure that the uptick you see in historical data after, say, a tax cut is from the treatment or just randomness (or driven by something else). With many, many runs of the economy, that can be sorted out (cross-country comparisons can help, but the all-else-equal part is never satisfied, making the comparisons suspect).
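The thought experiment of re-running the economy can be sketched directly as a simulation. Here a small "true" policy effect is buried in business-cycle noise: a single run tells you almost nothing, while averaging many runs recovers the effect. All numbers are invented for illustration:

```python
# One noisy run vs. many runs: why a single historical episode can't
# separate a policy's effect from randomness.
import random

random.seed(0)

TRUE_EFFECT = 0.3  # hypothetical boost to growth from the policy
NOISE_SD = 2.0     # business-cycle noise that swamps the effect in one run

def one_run():
    """Observed change in growth after the policy, in one 'economy'."""
    return TRUE_EFFECT + random.gauss(0, NOISE_SD)

single = one_run()  # could easily come out negative despite a positive effect
many = sum(one_run() for _ in range(10_000)) / 10_000

print(f"one run:             {single:+.2f}")  # dominated by noise
print(f"mean of 10,000 runs: {many:+.2f}")    # close to the true effect
```

With only one run (one history), the standard deviation of the observed outcome equals the noise itself; with 10,000 runs it shrinks by a factor of 100, which is exactly the luxury empirical macroeconomics does not have.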
Despite a few research attempts such as the Billion Prices Project, 'Little Data', with all the problems that come with it, is a better description of empirical macroeconomics.
(Via Economist’s View.)