
But Austerity Is Science!


Well, well, well:

In 2010, economists Carmen Reinhart and Kenneth Rogoff released a paper, “Growth in a Time of Debt.” Their “main result is that…median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower.” Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact.

This has been one of the most cited stats in the public debate during the Great Recession. Paul Ryan’s Path to Prosperity budget states their study “found conclusive empirical evidence that [debt] exceeding 90 percent of the economy has a significant negative effect on economic growth.” The Washington Post editorial board takes it as an economic consensus view, stating that “debt-to-GDP could keep rising — and stick dangerously near the 90 percent mark that economists regard as a threat to sustainable economic growth.”

[…]

In a new paper, “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff,” Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results. After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff and they were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff’s data was constructed.

They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don’t get their controversial result.

Dean Baker has more.

For the reasons Matt identifies — most importantly, that the causal inference drawn by Reinhart and Rogoff never made sense in the first place — it’s unlikely to matter, but it’s instructive.

UPDATE: Krugman responds to the response.

  • What’s interesting (do I want to say shocking?) is that they shared their data. Did they not know it would show they pretzeled their numbers, or did they not care? Too big to fail?

    To me this goes beyond “Ryan used this data so it must be wrong” and way into “Wait? What?” territory.

    • Hanspeter

      At this point it doesn’t matter. The zombie lie is out there. Mission accomplished.

      • firefall

        yup, their study will get endlessly cited now anyway.

        • Uncle Ebeneezer

          See: tax cuts, revenue

    • Walt

      They clearly thought they got it right. It’s weird they did their analysis in Excel — choosing the wrong cell range is really easy to do in Excel, while it’s much harder to make the analogous mistake in a statistics package.

      • Linnaeus

        Yeah, I thought something like SPSS would be de rigueur for this kind of work.

        • I wonder if they even did their own analysis. I half-expect this to be blamed on a research assistant.

        • m

          Not many economists use SPSS. Stata and, for the young guys, R are both more common.

      • Lots and lots of people use Excel. At all levels. Scientists esp. Biologists love their spreadsheets.

        I vaguely recall a survey that something like 50% of spreadsheets have a serious bug.

        My understanding is that they only released their spreadsheets when confronted with replication failure.

        • Perhaps I’ve been playing in the medical field too long but I would have expected a lot of cryin’ and lyin’ before handing over bad work. (If at all.)

        • This is relevant.

          Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology, Keith A. Baggerly, Kevin R. Coombes

          High-throughput biological assays such as microarrays let us ask very detailed questions about how diseases operate, and promise to let us personalize therapy. Data processing, however, is often not described well enough to allow for exact reproduction of the results…we show in five case studies that the results incorporate several simple errors that may be putting patients at risk. One theme that emerges is that the most common errors are simple (e.g., row or column offsets); conversely, it is our experience that the most simple errors are common.

        • Joshua

          I would be surprised if it was as low as 50%.

          The problem with Excel is that it was not designed for this. You can do it, but it’s really easy to mess up.

          A lot of people would rather just hack together some Excel sheet over and over than learn Access (Excel is often used as a database) or R.

          • There’s a significant cost in switching tools.

            I mean, I agree a fair bit, but it’s nuts to ignore the ways in which Excel, esp. the interface, is just a big win for loads of people.

            • Malaclypse

              Not just the interface, but the standardization. I can e-mail an Excel sheet to anybody. The trivial number that don’t have Excel can open it with OpenOffice.

              • And not just the standardisation of format but of “the language”. Lots of people have working knowledge of Excel so can use what you send them. I have R on my computer but would have to fight to use it if you sent me some code.

            • This seems like a very nice paper with nice reviews of various studies.

              A key point:

              …how many errors there are, not whether an error exists. These error rates, although troubling, are in line with those in programming and other human cognitive domains.

              If you factor in failing to get an analysis done at all, the comparison might skew even further in favor of spreadsheets.

              I’m not condoning errors, but it’s a dream to think that going to stats packages eliminates errors.

              • I think the main problem is how we approach Excel. It is really easy to fill up columns and change numbers and hit an enter key in the wrong place. There’s some ability to protect yourself from your own stupidity by locking data once you’re happy with it, but nobody I know has ever mentioned such a thing and I have never seen it in action (and I’ve never done it myself). It might be because it’s such a common tool that we don’t approach it seriously. When something’s important maybe we should use a different tool just to make ourselves think about it.

                • The paper suggests that overconfidence and a lack of proper design methods are core. But OTOH they vastly overstate the progress we have in programming ;)

                  I think some testing protocols would go a long way, but that’s just a guess.

                • Walt

                  Excel is error-prone in a way that any programming language or stats package is not, because it makes you do things by hand that you would do automatically in the other settings. Excel is good for ad-hoc calculations where you’re not really sure what you’re looking for, but once you’re analyzing a dataset in a uniform way, you’re better off using almost anything else. For example, R&R could have done the whole analysis in a couple of lines of SQL, and they wouldn’t have made that mistake.
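
                  To be concrete, here’s a rough sketch in R (not R&R’s actual code; the data frame and column names are invented) of the kind of one-step grouped average being described: once the debt category is a variable, the average is a single call over whole columns, with no cell ranges to select by hand.

                    # toy data: growth rates tagged with a debt category
                    growth <- data.frame(
                      country    = c("A", "A", "B", "B"),
                      debt_cat   = c("under90", "over90", "under90", "over90"),
                      gdp_growth = c(3.1, 2.2, 2.8, 2.4)
                    )
                    # mean growth by debt category, one line, no hand-selected ranges
                    aggregate(gdp_growth ~ debt_cat, data = growth, FUN = mean)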

                • Walt, eh.

                  I guess it’s futile to ask people to read the papers I link to, which actually study the phenomenon, instead of continuing to spout ad hoc, anecdotal, and unsupported claims while being holier-than-thou about statistics. But please, at least glance at the papers.

                  I’ll also note that the contribution of the Excel error is dwarfed by the other errors. Via RortyBomb

                  As Herndon-Ash-Pollin puts it: “A coding error in the RR working spreadsheet entirely excludes five countries, Australia, Austria, Belgium, Canada, and Denmark, from the analysis. [Reinhart-Rogoff] averaged cells in lines 30 to 44 instead of lines 30 to 49…This spreadsheet error…is responsible for a -0.3 percentage-point error in RR’s published average real GDP growth in the highest public debt/GDP category.” Belgium, in particular, has 26 years with debt-to-GDP above 90 percent, with an average growth rate of 2.6 percent (though this is only counted as one total point due to the weighting above).

                  So what do Herndon-Ash-Pollin conclude? They find “the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim].” [UPDATE: To clarify, they find 2.2 percent if they include all the years, weigh by number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.

                  But the Excel error, if I’m reading this right, only accounts for 13% of the error.
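
                  Just to illustrate the mechanics with made-up numbers (this is not the actual R&R data): averaging a truncated range, the analogue of dragging the selection over rows 30 to 44 instead of 30 to 49, silently drops the last five observations and shifts the mean.

                    # twenty hypothetical growth rates
                    growth <- c(2.6, 3.9, 2.4, 2.2, 1.0, 0.3, 2.9, 2.5, 1.8, 3.0,
                                2.6, 2.7, 2.1, 1.9, 2.3, 2.6, 2.8, 2.0, 2.5, 2.2)
                    mean(growth)        # average over all 20 values
                    mean(growth[1:15])  # the "wrong range" version, quietly dropping the last five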

      • DrDick

        That sort of jumped out at me as well. I can see entering your data into Excel and then importing it into a stat package, but I cannot even imagine trying to do any kind of sophisticated statistical analysis in Excel.

        • Why?

          Excel (or spreadsheets) are actually pretty powerful tools and nicely interactive. They have a slew of functions as well as very good tools for moving around the data (pivot tables, anyone?). Etc., etc.

          Plus, was the analysis sophisticated? Wasn’t the error in a simple averaging?

          • Walt

            The reason is that Excel is very error-prone. I once did an analysis in R that I wanted to repeat in Excel. I couldn’t replicate it, and I spent hours trying to track down the discrepancy. It turned out it was a mistake as dumb as R&R’s.

            Another recent case was the London Whale trade where JP Morgan lost like 6 billion dollars. It turned out the risk management analysis was done in Excel, and it had a coding error.

            • Right, in any kind of typical stats package you are generally manipulating columns of data, and where you are working with subsets they are going to be indexed by category variables (like Treatment vs. Placebo), so it’s harder to screw up like this.

            • Perhaps. But for a lot of people a stats package is a non starter.

              Thus it’s unsurprising that many people would use Excel.

              One of my students moved to Gnuplot for graphs because she wanted a saner workflow, and, well, it wasn’t hugely more robust, had ugly, hard-to-tweak output, was a PITA to get working, and was terrible for exploration.

              • Walt

                Then lots of people are routinely producing wrong analyses. Excel is fine for producing graphs — and in fact in a graph it’s easier to spot that you put in the wrong data — but for any non-trivial analysis it requires an attention to detail that normal humans lack.

                • I think lots of people are producing wrong analyses whatever tool they’re using. And so?

              • (the other) Davis

                Gnuplot is a gratuitously difficult software package. In my teaching days I used it because it generated nicer graphs to display on exams than Mathematica or Maple would, but I had to relearn even the most basic commands every time I used it.

                • Thank you for saying that. I thought it was just me.

                • It really isn’t just you.

                • Use the Matplotlib wrapper (in either the Python 2.x or 3.x series). The syntax is much more humane.

            • cpinva

              so far, what all of you are describing is not problems with excel, but problems with coding or input, excel (and access) can’t help you there. for that matter, no statistical package can help you on that either. what’s required is:

              1. an initial calculation, after data input, to see if the output comes close to what you were expecting. if it’s wildly off, this should be a red flag. in this case, it was what they were hoping for, so they weren’t troubled by it.

              2. a review of the input data (time consuming but necessary), to make sure: a. everything was entered, b. it was entered correctly, and c. it was entered in the right place. a failure in any one of these will corrupt the output.

              • No, this is specifically an issue with clicking and dragging to select a subset of cells to perform an analysis on.

                It’s entirely due to how Excel handles data and indexing… in one column you can have data, results, and headings! There is not a single other stats package I’m aware of that would handle things in such a ludicrous and haphazard manner.

                • In the survey of errors I didn’t see any special problem with clicking and dragging. My guess is that it’s a “mechanical error”; those are fewer than logic errors and more often corrected.

          • Brandon

            One of my engineering profs told us that Excel was the most powerful engineering tool out there.

        • spencer

          You can actually do some pretty complicated analyses in Excel if you have the right add-on package.

          • mds

            To reiterate the point of others, it’s that Excel lends itself more readily to errors, not that it lacks features.

            • cpinva

              “To reiterate the point of others, it’s that Excel lends itself more readily to errors, not that it lacks features.”

              no, it doesn’t. input/coding errors are not the application’s fault, they are the fault of the user. excel requires the same level of concentration, and testing, as any other spreadsheet/data base package.

              • Alan Tomlinson

                When the software consistently fails the end user, it is the software that should be changed, not the end user.

                Cheers,

                Alan Tomlinson

                • cpinva

                  “When the software consistently fails the end user, it is the software that should be changed, not the end user.”

                  and if that were the case with excel, i would agree with you. the fact is, it doesn’t. more often than not, the failure is the user’s fault, not the application’s. if excel were as uniformly bad as you would have us believe, it wouldn’t be so popular. there are many other spreadsheet apps out there, excel is simply one of them, and no one is forced to use it.

                  that’s not to say it’s perfect, it isn’t. however, it does help if you take the time to learn how to use it, like any other app, it isn’t going to do it for you.

                • I believe that the argument isn’t that Excel per se is especially error-inducing, but that spreadsheets are more error-inducing than other ways of doing statistical analysis with computers, in particular, using a statistics package (like SPSS or Stata) or a programming language (like R).

                  There are several reasonable reasons to prefer them, including quality of implementation of certain functions (though an add-on can help Excel), replicability, or fitting into a tool chain.

                  The question is whether Excel is either inherently more error prone or de facto more error prone (e.g., because it allows relatively unskilled or untrained people into the club, or it facilitates overconfidence).

                  The evidence suggests that it’s not particularly more error prone than programming, though, of course, it might have a different bug likelihood profile than a different tool (e.g., it’s not prima facie ridiculous to think that “bump the keyboard” modification of data cells is a bigger risk in Excel since the data are “accessible” for the entire development time instead of safe in some input file). But it seems that the overall error rate is similar (in a broad way) to programming or other complex tasks.

                  This doesn’t make the use of Excel a ridiculous option for serious work, nor does using a more specialized tool mean that you’re going to get significantly better error rates. (It might be the case, but I’d like to see some data.) In some scientific communities it’s dominant (e.g., I think, biology).

                  Spreadsheets tend to attract more “end users” than professional programmers, which is Yet Another Issue.

                  Putting aside stats, spreadsheets are used for data management, i.e., as simple databases. Again, there are a slew of possible issues there, and they’ll definitely crap out on you (e.g., for multiuser, scalable databases :)). Often they are an awkward tool compared to, e.g., a SQL database for some tasks. But mastering SQL and a SQL database is a daunting task for many people (there are studies of errors in SQL queries, which are also quite high and spawned a rich field of query interfaces such as Query By Example), so they make due (hi Mal!).

            • See the paper I link to above. Another key quote:

              When most people look at Tables 1, 2, and 3, their first reaction is that such high error rates are impossible. In fact, they are not only possible. They are entirely consistent with data on human error rates from other work domains. The Human Error Website (Panko, 2005a) presents data from a number of empirical studies. Broadly speaking, when humans do simple mechanical tasks, such as typing, they make undetected errors in about 0.5% of all actions. When they do more complex logical activities, such as writing programs, the error rate rises to about 5%. These are not hard and fast numbers, because how finely one defines reported “action” will affect the error rate. However, the logical tasks used in these studies generally had about the same scope as the creation of a formula in a spreadsheet.

              The most complete set of data on error rates comes from programming, which is at least a cousin of spreadsheet development. In programming, many firms practice code inspection on program modules. In code inspection, teams of inspectors first inspect the module individually and then meet as a group to go over the module again (Fagan, 1976). Significantly, there is a requirement to report the number of errors found during code inspection. This has resulted in the publication of data from literally thousands of code inspections (Panko, 2005a). The data from these studies shows strong convergence. Code inspection usually finds errors in about 5% of all program statements after the developer has finished building and checking the module (Panko, 2005a). While there is some variation from study to study, much of this variation appears to be due to differences in programming language, module difficulty, and, sadly, in hastiness in development.

              • Walt

                Ah, but Excel requires more code, and therefore more errors. In anything else, R&R’s analysis is a couple of lines of code.

                • I’m not clear that, on average, Excel is more code.

                  It’s not clear to me that this spreadsheet was very large.

                  It’s also not clear to me that this particular bit of work is easier in any other language or, perhaps, easier for them.

                  But by all means carry on with your overconfidence.

                • Walt

                  You don’t actually know how to use anything else, do you?

                • Actually I do. Not that that’s relevant.

                  But, again, feel free to continue as you have been, evidence free.

                  However, if you actually do follow the links I’ve provided you might find them interesting. Or, at least, correcting.

                  (BTW, one of my favourite data spelunking tools is Panorama, which does a fairly nice job of blending a database with a spreadsheet. Very slick.)

                • spencer

                  Panorama? Haven’t heard of that one … but thanks for the suggestion.

            • spencer

              To reiterate the point of others, it’s that Excel lends itself more readily to errors, not that it lacks features.

              Yeah, I can see that, but the specific comment I was replying to contained this:

              I cannot even imagine trying to do any kind of sophisticated statistical analysis in Excel.

              Hence, my reply.

              • To reiterate the point of others, it’s that Excel lends itself more readily to errors, not that it lacks features.

                Yeah, I can see that

                Except that it’s almost certainly incorrect, at least, in so far as I am able to determine from my perusal of the spreadsheet error literature.

                (Esp. if we mean errors in the final output. There may be more errors that get corrected along the way.)

      • It boggles my mind that there are PhD-level people using Excel to do analysis. I get that it comes with Office so is sort of free… and stats software can be really expensive… but R is freely available and not especially challenging to learn if you have any experience programming (obviously quite a bit harder if you have none – but then get Minitab or something).

        • It really isn’t the expense for most people, it’s that programming (with text) is a big barrier for many people. Excel is more forgiving and interactive.

          I totally recognise the downsides but I’m a bit astonished that people are blind to the upsides.

          • Needing to have an explicit spreadsheet for data entry and menu driven stats ultimately limits how sophisticated your analysis can ever be but that does not mean there are no options. Minitab is basically Excel with better stats and data organization and is what a lot of university programs will use to teach basic stats. Systat is pretty similar and could be an alternative choice.

            The idea of clicking and dragging to select a set of cells to perform analysis on is something that should only be countenanced for quick and dirty stuff.

            I know this will smack of “stats elitism” or some such, but using Excel suggests that the person hasn’t had sufficient statistical training to be introduced to more serious software packages.

            • Needing to have an explicit spreadsheet for data entry and menu driven stats ultimately limits how sophisticated your analysis can ever be

              Not really, and certainly not within the limits of most desired analysis.

              Plus, people do write complicated macros and VBA apps.

              I mean, I agree in some sense. There’s also a limit on the schmanciness of the graphs it can produce. That really isn’t an interesting argument until we establish that it can’t do the things people need of it.

              The idea of clicking and dragging to select a set of cells to perform analysis on is something that should only be countenanced for quick and dirty stuff.

              I certainly don’t accept this without, y’know, an argument.

              I know this will smack of “stats elitism” or some such, but using Excel suggests that the person hasn’t had sufficient statistical training to be introduced to more serious software packages.

              Regardless of what it suggests, it’s probably incorrect or irrelevant.

              • I certainly don’t accept this without, y’know, an argument.

                The error they made is basically impossible in any normal stats package.

                Regardless of what it suggests, it’s probably incorrect or irrelevant.

                I’ve never met a statistician who works in Excel, and knowledge of statistics is certainly relevant to doing correct analysis.

                • Most people doing stats are not statisticians per se.

                  As I said, from the literature I’ve seen, this error doesn’t seem dominant. And sure, perhaps it’s impossible in some systems, but that can open you up to other problems.

                • I found this interesting.

                • I found this interesting.

                  Yes, if you use what your link calls “list format” you’d avoid this error as well. Having to rearrange your data from that format to do an ANOVA, though, is maddening.

                • DrDick

                  That is my experience as well and the reason for my comment. I know a number of people who do heavy statistical analysis (well above my meager abilities) and nobody I have ever met uses Excel if they need anything more than simple math or a mean.

              • Walt

                Dude, you’re reading about an analysis that may have cost the UK several points of GDP and years of elevated unemployment. JP Morgan lost 6 billion dollars because of Excel.

        • BigHank53

          You would shit yourself if you knew how many civil engineering projects–things you drive across every day or that hold back the water upstream from you–were designed and built with nothing more sophisticated than Excel.

          • Heck, with CALCULATORS. Or slide-rules, even. Things were built before there were computers, after all.

            • True, but I would advise any PhD level person using a slide-rule to upgrade.

              • Hey, they have GREAT battery life!

              • Hogan

                You can have my abacus if you pay for the shipping.

                • cpinva

                  you can’t have mine! it always works, and i never need to replace the batteries. it is tough using it in low light though. i’ll keep my sliderule also, for the same reasons.

          • Walt

            Surprisingly, civil engineering and statistics are not the same thing, even though they both involve numbers and shit.

          • afeman

            I know of somebody who, just for the halibut, programmed a finite-difference ocean model in Excel. (western boundary current with beta, if you care)

        • sparks

          As an undergrad I was taught R, or more to the point I taught myself R from a cursory intro by a stat professor. Not the easiest (nor the hardest) to learn, but for a person who does this sort of analysis for a living I should think a must.

          • Taylor

            At my university Econ undergrads were required to teach themselves Stata. It was painful but wasn’t particularly difficult, and it’s stunning to me that there are researchers using Excel.

      • By the way, curse all of you in this thread who’ve forced, I say FORCED, me to defend Excel!

        • firefall

          it worked!

        • BTW, if you are interested in spreadsheet errors, this is a really nice paper (on first read).

          One thing I find very interesting about this thread is the exhibition of highly resistant cognitive blindness in people who are clearly very capable and knowledgeable about data analysis. One might have naively thought that such folks would recognize the problem with either generalizing from their own experience or casual observation of a particular error (in the paper) once either a) the actual effect of the error was made clear or b) I pointed to the literature which supports the basic idea that, overall, spreadsheets aren’t fundamentally more error prone than other mechanisms.

          (Note, that I am not, I think, overconfident of my reading. Some of it was half remembered stuff and the rest the result of some quick looking around. Obviously, I could be mistaken in a variety of ways.)

          Fun! People are wacky!

          • The article you cite above doesn’t actually seek to compare error rates between programming and spreadsheets… they just say they are “comparable” without any analysis. They aren’t comparing errors between people doing a task in a spreadsheet vs. someone doing the same in SAS or whatever… and coding in an industrial sense isn’t really all that similar to doing analysis in statistical software. Yes there is some coding that you could do incorrectly to introduce error, but it’s not the same as writing thousands and thousands of lines of code like they are talking about. The flip side is that the coding part actually gives you something to easily check and debug… unlike a spreadsheet where the formulas are all hidden from view.

            I’ll now note for the record that there appears to be an entire body of literature based on spreadsheet errors, while there does not appear to be a similar body of work dedicated to user errors in statistical software.

            • The article you cite above doesn’t actually seek to compare error rates between programming and spreadsheets… they just say they are “comparable” without any analysis.

              ?? They are similar in extent.

              They aren’t comparing errors between people doing a task in a spreadsheet vs. someone doing the same in SAS or whatever… and coding in an industrial sense isn’t really all that similar to doing analysis in statistical software.

              This is true, and a worthwhile point. I didn’t find any direct comparisons, alas. Indeed, there’s a prima facie case to be made on two fronts:

              1) That for a particular class of models, one mechanism is more error prone than others.

              2) (The Walt move): if we could show that Excel is more verbose for the same task than other mechanisms, then comparable error rates would produce different amounts of error.

              Those are reasonable moves. We don’t have data (at least I’ve not immediately found any) to compare. If I dig out the SQL papers, I think they’ll show similar stuff, though.

              (Other reasonable moves are the overconfidence one and the “more naive users” one and the “harder to use filters out incompetents” one. The last perhaps isn’t so good because error rates in spreadsheets seem indifferent to expertise (as with programming).)

              Yes there is some coding that you could do incorrectly to introduce error, but it’s not the same as writing thousands and thousands of lines of code like they are talking about.

              Error rates per KLOC are quite stable and not something that kicks in at high levels. Even short pieces of code can have surprising numbers of errors.

              The flip side is that the coding part actually gives you something to easily check and debug… unlike a spreadsheet where the formulas are all hidden from view.

              This seems quite odd to me. These papers discuss things like code review of spreadsheets. You can certainly inspect the formula in a spreadsheet. Do you mean the whole chain of formulae are not simultaneously visible? But is that “easy to check and debug”? How so?

              The experiment would be easy to extend to stat package users, as the test model is so simple. One might have a bit more trouble recruiting participants, but, as I think you said, lots of econ people learn standard stat packages, so it shouldn’t be TOO hard.

              I’ll now note for the record that there appears to be an entire body of literature based on spreadsheet errors, while there does not appear to be a similar body of work dedicated to user errors in statistical software.

              I’m a little surprised that you would make such a specious argument. But if it makes you feel good, ok!

              (I.e., spreadsheet use is certainly orders of magnitude larger and more widespread than stats package use, and even then, the study of spreadsheet errors is still quite a young field.

              And of course, these studies aren’t picking out use of spreadsheets for statistical use per se. So perhaps the error rates are lower there? I sorta doubt it, but it’s at least possible.)

              Oh, but here

              Of 14 surveys of statistical errors found in the medical literature from 1960 to 1993, statistical error rates range from 31% to 90% and are typically around 50% (see Fig. I).

              I’m not finding (immediately) a lot of immediately useful and easy to summarize stuff, but one thing seems clear, “logic” errors (i.e., using the wrong test, using the test wrongly, bugs in the model, etc.) seem fairly common. Those also happen to be the more common spreadsheet errors. That’s suggestive, at least.

              I don’t insist that spreadsheets don’t induce certain classes of errors or even that they aren’t more error prone than other tools. I just note that they don’t seem *exceptionally* error prone and the sorts of errors found aren’t the ones, afaict, y’all claim for them. So, I don’t know where y’all get your confidence that Excel is inherently a clusterfuck for (all) statistical work such that anyone who has a PhD should hang their head in shame for daring to use it.

              • And none of this is to disparage stats packages or (esp.) R which I think are super cool. I love databases as well! Etc. etc. I would certainly argue that for lots of tasks, that something like R is the right tool, but also that you want someone who is a professional R programmer to use it. And even then, don’t get overconfident.

              • I don’t insist that spreadsheets don’t induce certain classes of errors or even that they aren’t more error prone than other tools. I just note that they don’t seem *exceptionally* error prone and the sorts of errors found aren’t the ones, afaict, y’all claim for them. So, I don’t know where y’all get your confidence that Excel is inherently a clusterfuck for (all) statistical work such that anyone who has a PhD should hang their head in shame for daring to use it.

                We’ve all said that it is prone to a certain type of error… an error in indexing, that is very hard to make in software specifically designed for statistics and analysis. It doesn’t take empirical data to see this, simply experience and common sense.

                However, since you seem to be obsessed with the concept that spreadsheets are no more error prone than anything else: if you look at this paper you will see errors broken down into type. Note that fully 1/3rd of all errors were due to referencing the wrong cells.

                We do not have comparable data for stats packages, but the simple fact is that TTEST(A11:A22,B17:B28,2,1) is more prone to error than simply ttest(data(:,1),data(:,2)), since I don’t have to worry about mis-indexing the rows.

                You can certainly inspect the formula in a spreadsheet. Do you mean the whole chain of formulae are not simultaneously visible? But is that “easy to check and debug”? How so?

                It’s simply easier to debug code that is separate from data. I can see the sequence of actions performed on the data clearly. I can easily execute them in series and see where the problem is.

                This is not true of spreadsheets, where data and code occupy the same space. How do I know what your spreadsheet does if you aren’t there to tell me? Click on every box and see if it’s a formula or data? Don’t you see how that’s a wee bit more difficult to parse than code?
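
                For what it’s worth, a minimal R sketch (data frame and column names invented) of the contrast being drawn: the test takes whole named columns, so there is no A11:A22-style range to mistype.

                  # two named columns of made-up measurements
                  scores <- data.frame(
                    treatment = c(5.1, 4.8, 5.6, 5.0, 5.3),
                    placebo   = c(4.2, 4.5, 4.1, 4.4, 4.0)
                  )
                  # the test references columns by name, not a hand-selected cell range
                  t.test(scores$treatment, scores$placebo)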

                • We’ve all said that it is prone to a certain type of error… an error in indexing, that is very hard to make in software specifically designed for statistics and analysis. It doesn’t take empirical data to see this, simply experience and common sense.

                  Ah, yes. An Appeal to Experience and Common Sense. I feel duly chastised even before your snark below!

                  But look, even granting that, it doesn’t make it bonkers to use Excel for stats. As I pointed out, it might be that, for some class of users, they never manage to produce an R script for their model. So their choice is a spreadsheet or nothing. Similarly, it’s possible to trade one class of errors for another. Furthermore, it might be that such errors are easily correctable and actually corrected.

                  However, since you seem to be obsessed with the concept that spreadsheets are no more error prone than anything else:

                  A fairer characterisation is that I prefer evidence driven discussion. Which will be no surprise to people familiar with my comments :)

                  if you look at this paper you will see errors broken down into type. Note that fully 1/3rd of all errors were due to referencing the wrong cells.

                  Thanks for the link! A very interesting paper. It’s way late so I won’t have time to read it carefully, but a quick skim reveals a few interesting points. So, there’s your point:

                  if you look at this paper you will see errors broken down into type. Note that fully 1/3rd of all errors were due to referencing the wrong cells.

                  but also:

                  We began this article by noting that the received wisdom on spreadsheet errors maintains that errors average 5% of cells and this rate is consistent across spreadsheets. Our results give a very different impression.

                  The average cell error rate appears to be closer to 1% or 2%, depending on how errors are defined.

                  Perhaps even more important, roughly half of the spreadsheets we tested had rates below these levels, although a few had astonishingly high rates. These results suggest that errors are not a constant factor, but presumably depend on the situation being modeled and the developer’s skills. Finally, we document for the first time in the published literature the sources of errors. Thirty to forty percent of errors are due to embedding numbers in formulas. The next most common error type is reference errors

                  and

                  Errors in spreadsheet data and formulas are not the only possible causes of errors in spreadsheet use. In fact, many of the press accounts reported by EUSPRIG involve misuse of spreadsheets. For example, spreadsheet results can be sorted incorrectly, or an out-of-date version of a spreadsheet may be used. More generally, poor decisions based on spreadsheet models can arise by modeling the wrong problem or by misinterpreting the results.

                  Also, I’ll note that the errors were uncovered by using two auditing/debugging tools. Which suggests a level of debuggability.

                  But, by all means, cherry pick. Note that independently of your post here, I also posted a comment which works against my earlier comments. Which suggests that I’m more obsessed with finding out the truth than pushing my line.

                  Unfortunately, thus far, I don’t have much data against your focus on pushing a line :)

                  We do not have comparable data for stats packages, but the simple fact is that TTEST(A11:A22,B17:B28,2,1) is more prone to error than simply ttest(data(:,1),data(:,2)), since I don’t have to worry about mis-indexing the rows.

                  Well, yes, this is plausible. But note that in the paper I discuss below, errors happen outside of Excel due to poor choice of label names and the confusions that follow from that. And it’s certainly possible to name ranges in Excel, which should help some.

                  It’s simply easier to debug code that is separate from data. I can see the sequence of actions performed on the data clearly. I can easily execute them in series and see where the problem is.

                  Well, maybe? I certainly see the appeal, but, in point of fact, debugging code is hard. Tracing through data is hard. I don’t see the relationship between the source data and the results “immediately.” (I have to *trace* through, which involves either writeln-esque stuff or using a symbolic debugger, which is not easy.) Now, stats packages might make this easier in their UIs.

                  I certainly can believe that if you already are skilled at programming that the skills transfer over. But that’s a rather different claim.

                  This is not true of spreadsheets, where data and code occupy the same space. How do I know what your spreadsheet does if you aren’t there to tell me?

                  Er…but if we’re talking about development of spreadsheets, surely the more typical cases is that the developer is examining/debugging the spreadsheet.

                  Click on every box and see if it’s a formula or data? Don’t you see how that’s a wee bit more difficult to parse than code?

                  I’m not sure what’s driving your condescending tone. Just FYI, I’m pretty familiar with coding in several modalities. I wouldn’t consider myself a super programmer or anything (certainly not at the coding level), but I’ve worked with amazing ones and I study, professionally, various aspects of people producing models (though usually logic-based models). I don’t say this because my status makes me right, but in the hope that part of our disconnect is that you think I don’t know what I’m talking about or am unfamiliar with development practices. I am, both experientially and from the literature.

                  So, in order to make a reasonable comparison, we’d need to compare like things. It’s not clear to me at all that clicking on cells is a harder task for many users than reading a bit of code. Reading code is hard. Horizontal scrolling in the Excel entry bar is also wretched. To determine how all the factors cash out, we need to actually study what’s going on.

                  But whatever. I suggest you write me off entirely. I’m definitely not going to be convinced by “experience and common sense”. It’s an interesting area and clearly we need a lot more research to understand what’s going on.

                • I’m not trying to be condescending, so I’m sorry you took it that way… but the issue here is that we don’t really have any evidence to address the question at hand. We have lots of papers examining spreadsheet error rates but none that compare spreadsheets to any other method… be it SAS, abacus, or slide-rule.

                  So you either have to rely on experience and theory or say that we don’t really know how the rate of spreadsheet error compares to other methods and there is nothing left to discuss until somebody does the study.

            • JW, you’re a biomed stats guy, perhaps you can get a better grip on the paper linked to here.

              It’s clear that there are some Excel files in the mix, but not only and it’s not clear how that fits in with the analyses.

              • Up! There’s clear discussion in the conclusion.

                On the nature of common errors. In all of the case studies examined above, forensic reconstruction identifies errors that are hidden by poor documentation. Unfortunately, these case studies are illustrative, not exhaustive; further problems similar to the ones detailed above are described in the supplementary reports. The case studies also share other commonalities. In particular, they illustrate that the most common problems are simple: for example, confounding in the experimental design (all TET before all FEC), mixing up the gene labels (off-by-one errors), and mixing up the group labels (sensitive/resistant); most of these mixups involve simple switches or offsets. These mistakes are easy to make, particularly if working with Excel or if working with 0/1 labels instead of names (as with binreg). We have encountered these and like problems before. As part of the 2002 Competitive Analysis of Microarray Data (CAMDA) competition, Stivers et al. (2003) identified and corrected a mixup in annotation affecting roughly a third of the data which was driven by a simple one-cell deletion from an Excel file coupled with an inappropriate shifting up of all values in the affected column only. Baggerly, Morris and Coombes (2004), Baggerly et al. (2004) and Baggerly et al. (2005) describe cases of complete confounding leading to optimistic predictions for proteomic experiments. Baggerly, Coombes and Neeley (2008) describe another array study where there was a mixup in attaching sample labels to columns of quantifications, most likely driven by omission of 3 CEL files leading to an off-by-three error affecting most of the names. These experiences and others make us worry about the dual of the italicized statement above, that the most simple problems may be common.

                7.4. What we’re doing. Partially in response to the examples discussed here, we instituted new operating procedures within our group, mostly simple things having to do with report structure. Reports in our group are typically produced by teams of statistical analysts and faculty members, and issued to our biological collaborators. We now require most of our reports to be written using Sweave [Leisch (2002)], a literate programming combination of LATEX source and R [R Development Core Team (2008)] code (SASweave and odfWeave are also available) so that we can rerun the reports as needed and get the same results. Some typical reports are shown in the supplementary material. Most of these reports are written by the statistical analysts, and read over (and in some cases rerun) by the faculty members. All reports include an explicit call to sessionInfo to list libraries and versions used in the analysis. The working directory and the location of the raw data are also explicitly specified (in some cases leading to raw data being moved from personal machines to shared drives). We also check for the most common types of errors, which are frequently introduced by some severing of data from its associated annotation [e.g., using 0 or 1 for sensitive or resistant instead of using names (noted above), supplying one matrix of data and another of annotation without an explicit joining feature, calls to order one column of a set]. R’s ability to let us use row and column labels which it maintains through various data transformations helps. These steps have improved reproducibility markedly.

                Interesting and suggestive. I’ll note that it’s not just the shift in tool (Excel to R) but the whole workflow modification (which is, indeed, probably easier with R as a component than Excel… though it’s certainly not impossible to do something similar in Word/Excel).

                Interesting.
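
                Just to make the habit concrete, a tiny R sketch (data invented, not from the paper) of the label-keeping and session-logging practices they describe:

                  # keep gene labels attached so they travel through transformations
                  expr <- data.frame(sensitive = c(1.2, 0.8, 2.1),
                                     resistant = c(0.9, 1.5, 1.8),
                                     row.names = c("gene1", "gene2", "gene3"))
                  log_expr <- log2(expr)   # row labels are preserved automatically
                  sessionInfo()            # record package versions used for the report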

                • The most significant error they cite is simply poor practice… you could easily write an R script that was an undocumented hot mess (and I have!)… however something like Excel is more prone to this because it is informal by nature. That is what attracts people to it in the first place. Programmers know that commenting your code is critical even to understanding what you did 6 months ago… I don’t think the average Excel user does.

                  The other part is what I mention above: having code separate from data. You basically never touch your original data in something like R. It sits in a file somewhere and remains pristine for all time. All transformations are saved as new variables so you can always go back and check. With Excel I fat finger something and all of a sudden all of my data is shifted by 1.
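
                  A minimal sketch of that habit in R (file and column names are hypothetical): the raw file is read once, and every change becomes a new object, so the original is always there to check against.

                    raw      <- read.csv("debt_growth_raw.csv")             # original file never edited
                    cleaned  <- subset(raw, !is.na(gdp_growth))             # drop incomplete rows
                    rescaled <- transform(cleaned, debt_ratio = debt / gdp) # derived values as a new object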

  • ADHDJ

    Actually, the spreadsheet adds up fine if you make sure and enable the “Nate Silver is a Witch” option in Excel.

    • Malaclypse

      Yes, but Microsoft didn’t enable that until Office 2010. Prior to that, you had to make due with the “I wish Krugman would stop being potitical” macro.

      • Malaclypse

        “political,” that is. I blame social liberalism for the error.

        • Scott Lemieux

          Don’t forget expansive readings of the commerce clause.

        • Warren Terra

          So what gets the blame for “make due”, if we’re finding deeper meaning in typos?

        • Hogan

          But you’re OK with “make due”?

          Actually, you’re an accountant, it’s tax time, no points off for having dueness on your mind.

          • Malaclypse

            I’m on a fiscal year, and we outsource taxes anyway. I have no excuse, and am abashed and ashamed.

          • cpinva

            “Actually, you’re an accountant, it’s tax time, no points off for having dueness on your mind.”

            all dueness, for calendar year filers, passed at midnight last night. next will be the extension deadlines, and first quarter financials.

      • Potitical. Hur.

      • DrS

        “Potitical”

        /titters

        Yes, I’m 12

    • cpinva

      i believe that feature is only available in service pack 2, of office 2010.

      • Linnaeus

        Hell, I’m still running Excel 2007. On XP.

    • UserGoogol

      For what it’s worth, Nate Silver is on the record as saying that for computational software he uses “Stata for anything hardcore and Excel for the rest.”

  • John Protevi

    For once, the ICP is right on: Fucking Microsoft. How does it work?

    • DrDick

      Not very well most of the time.

      • Kurzleg

        Excel works great when it isn’t crashing. I use it at work, and at least once per week it’ll crash. It’s usually when I’ve got 5-10 spreadsheets open, and none of them are particularly big files. In fairness, Office 2010 does a pretty good job of recovering the files, but the fact that it crashes as often as it does is irritating.

        • Malaclypse

          Interesting. I normally run upwards of 20, and pretty much never crash. I’m running 2007, FWIW.

        • cpinva

          i’ve never had excel itself crash on me, ever, regardless of version. i have had RAM issues, resulting in it freezing, in mid-comp. again, that isn’t an excel problem, it’s a hardware problem. not to be pimping MS office, but for me, i’ve never really had problems, like those described on this thread, with any part of it, that weren’t of my own, or someone else’s making.

        • DrDick

          I use Excel for my grade books and a few other functions and it is fine for that. I just have issues with Microsoft.

          • Use this instead:

            http://www.libreoffice.org/

            • DrDick

              At this point, I am used to working around the idiocies of Microsoft and it is what my university installs on my computer. I can also get the most recent Office Suite under license for $10 in the bookstore. Life is easier when you do not have to translate back and forth.

    • Chilly

      I always thought PowerPoint was the only software capable of this kind of disaster.

      http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001yB

      • cpinva

        the biggest problem with powerpoint, is that it replaces a boring & tedious lecture, with a boring & tedious slideshow. being boring & tedious, in multi-media, is still boring & tedious.

        • I use powerpoint strictly for my collection of cool images. No text never ever. Except a list of key terms at the start.

        • I…am…resisting.

          Oh, fuck it.

          PowerPoint/Presentation, even used poorly, can be rather helpful, particularly for non-native speakers. At least, this is my experience.

          I do curse the fact that in computer science, I basically have to use slides for everything. Contrariwise, in philosophy, they still insist on reading papers aloud, so…

  • gman

    Pete Peterson got what he paid for in sponsoring this. The zombie is out there… never to be killed.

  • comptr0ller
    • rm

      It looks like you’re writing an evil master plan to destroy the world. Would you like help with that?

      • Stan Gable

        That’s awesome.

  • Cody

    Sadly, conservative science can’t be countered by mere facts; it’s all about the tone you say them in (and the massive payouts to the wealthy).

    Personally, I’m looking forward to the defense. I’m sure just excluding every country that had high debt and average or better growth doesn’t skew their results at all!*

    *Can we wager on which publication comes out with this argument first?

    • Malaclypse

      McArdle, in the Atlantic. And she’ll link to Tyler Cowen.

      • Malaclypse

        And I swear I didn’t look at MR before I posted my snark.

        • sharculese

          In the blogosphere, the ratio of blog posts “attacking austerity” to “proposing constructive alternatives to austerity” is at least ten to one. That too tells you something.

          the fuck?

          • Malaclypse

            I believe he is saying that he may have been wrong, but hippies still smell.

            • mds

              That too tells you something.

              “I mean, sure, some of our go-to guys for legitimizing our preconceived policy preferences have been caught cherry-picking the data and demonstrating their blundering incompetence with a spreadsheet program. But what’s the other side’s alternative? Besides decades of unfiddled empirical data and ample present-day comparisons, I mean?”

          • John Protevi

            Evidently Keynesians and other stimulus fans don’t count as “constructive.” Sigh.

            • joe from Lowell

              If he’s not a Keynesian, how come he won’t release his birth certificate?

              • mds

                Harumph.

              • John Protevi

                LOL

              • spencer

                *golf clap*

          • Joshua

            All serious people want more austerity, and they only listen to serious people.

          • Scott Lemieux

            proposing constructive alternatives to austerity

            Who says we need one?

            It’s like really dumb defenses of electoral vote splitting. “Well, what’s your plan for getting 70 social democrats in the Senate and Bob Avakian in the White House?”

        • Guy whose research center cherry-picked data (didn’t include reproductive rights in its “freedom index”) is finding excuses to be unbothered that cherry-picking data compromised research that supports his ideological perspective.

          Also breaking, dog bites man, Krugman is shrill, and Jennifer Rubin is a horrible, horrible human being.

          • Also, chess geeks stick with chess geeks.

            • rea

              Rogoff is tremendously stronger at chess than Cowen; there’s no comparison. Grandmaster Rogoff drew Magnus Carlsen (winner of the just-held Candidates tournament) last year. Cowen briefly had a master rating back in the ’70s.

          • mds

            See also: Austerity saved the Irish economy! In your faces, Keynesians!

      • Malaclypse

        And while I forgot McArdle moved, she wrote about Reinhart, approvingly, yesterday.

  • afeman

    Can somebody with some background in some combination of econ and policy vouch for how influential this one single paper is supposed to be? As in, is policy over major economic areas really being steered on the basis of *one* *single* *paper*, the way a lot of econ bloggers are suggesting?

    • Linnaeus

      It’s not just one paper, it’s several papers and a book, but the central idea about growth and debt-to-GDP is the same. Dean Baker mentions it here.

      • afeman

        The discussion at Rortybomb suggests that it wasn’t peer reviewed, but the authors’ website lists many nonspecialist cites. Which sounds about right.

        I work in atmospheric science; as for getting policy traction based on decades of research, thousands of papers from multiple disciplines, the best-established physics since Newton, and, well… you fill in the rest.

    • IM

      As in, is policy over major economic areas really being steered on the basis of *one* *single* *paper*, the way a lot of econ bloggers are suggesting?

      I wondered about that too. At least here in Europe the austerity preference long predated this paper.

  • Anonymous

    Afeman: Paul Ryan and the Washington Post cited it prominently. That’s roughly 2x more influence than most economists have their entire careers.

    My question: assuming Ryan and the WaPo didn’t read the actual paper but were fed the info by a lobbyist (Peterson Fdn.?), will said lobbyist now renounce or acknowledge these new findings? [cynical jokes expected]

    • Mary Rosh

      I happen to know both of the authors of this paper quite well, and their work is accurate in every respect.

      • Kurzleg

        Except for the parts that aren’t accurate…

        • Hogan

          Like the data. And the math. And the conclusions.

        • Craigo

          I’m the guy who explains the joke.

          • Kurzleg

            I’d forgotten about that.

          • spencer

            I guess it’s because I was in an econ program when that story broke, but I’m always a bit surprised to find people who don’t know the name Mary Rosh – especially on a site that is aware of all Internet traditions.

            • Sprezzatura

              Only a monster like John Lott would use a sock puppet. Now take a guy like Lee Siegel: he’s got no need for one.

      • firefall

        that's a lott of work

      • Arnold Harvey

        Mary, I understand you will be publishing a new paper with them, adding many new data points but with the results the same to at least the third decimal place.

        • firefall

          I believe she prefers to be addressed as Msscribe now

          • Murc

            + 1

        • Cyril Burt

          Let me introduce my two able assistants, Margaret Howard and Jane Conway.

          • Hogan

            Cyril Burt! That’s kickin it old school. If that school were any older it would be dead.

    • cpinva

      "will said lobbyist now renounce or acknowledge these new findings?"

      ummmmmmmmmmmmmmmmmmmm, no. and why should they? they’ve been doubling & tripling down on the “lowering taxes increases revenues” k-mart theory for 30 years now, historical evidence of the opposite notwithstanding. my guess is that they’ll drag hayek’s corpse out to defend it.

  • Sly

    The math proves* that government spending made the Great Depression worse!

    *If you don’t include the years 1933 through 1937 and 1941 through 1944.

    • somethingblue

      The Depression was actually an era of great prosperity, with notably rare exceptions.

    • If only we could find the single Excel error that would drive a stake through the heart of Amity Shlaes's career.

  • jkay

    It's really cold fusion at work – truly – an obvious mistake was the only way cold fusion could ever be made to "work," just like this.

    Because Republican “austerity” worked so well it’s put us amazingly in debt, just like the Ryan plan would….

    • Brutusettu

      Jimmy Carter convinces 3rd worlders that their myths about water wells aren't 100% accurate, and Carter helps save lives.

      Gore tries to convince 1st worlders that carbon is part of one of the greenhouse gases, and Al Gore is fat, so austerity's confidence fairy works and John Shimkus is right.

  • Meister Brau

    Rogoff et al. respond here:

    http://www.businessinsider.com/reinhart-and-rogoff-respond-to-critique-2013-4

    In the natural sciences, we call this type of handwaving "working the editor," or an attempt at inducing reviewer fatigue.

    • mds

      we call this type of handwaving working the editor

      Well, why should they stop now?

      Given my even-shittier-than-usual mood, I really don’t want to get out of the boat. Is there some especially creative way in which they airily dismiss omitting data that contradicted their hypothesis?

      • Walt

        They argue that it confirms their results, because the new paper still finds that economic growth drops above 90%. (It goes from 3% to 2%. The original R&R paper found that above 90% it was negative.)

        • Marc

          And they ignore the devastating points about how they average (New Zealand equals the US in weight, for example) and about the years they excluded in countries that didn't fit their hypothesis.

          This is really bad stuff, and the response is weak.
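
          A minimal sketch of that weighting point, using invented numbers rather than anything from the actual R&R spreadsheet (the country names and figures below are hypothetical), in Python:

          # Toy numbers only -- not the Reinhart-Rogoff dataset.
          from statistics import mean

          # Hypothetical growth rates (%) for country-years with debt/GDP above 90%.
          high_debt_years = {
              "Small Country": [-7.6],                 # one crisis year gets the same
              "Large Country": [2.5, 2.7, 2.3, 2.6],   # weight as many ordinary years
              "Mid Country":   [2.4, 2.2],             # elsewhere under equal weighting
          }

          # Equal weight per country: average within each country, then across countries.
          by_country = mean(mean(v) for v in high_debt_years.values())

          # Equal weight per observation: pool every country-year.
          pooled = mean(g for v in high_debt_years.values() for g in v)

          print(f"country-weighted mean: {by_country:+.2f}%")   # comes out negative
          print(f"pooled mean:           {pooled:+.2f}%")       # comes out positive

          # Selective exclusion on top of that: drop one country's years entirely
          # and the country-weighted "average" sinks further.
          trimmed = {k: v for k, v in high_debt_years.items() if k != "Mid Country"}
          print(f"with exclusions: {mean(mean(v) for v in trimmed.values()):+.2f}%")

          The underlying growth numbers never change; only the averaging scheme and which years get kept decide the sign of the headline figure.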

          • Scanner

            For the draft of this biology paper I’m not-writing right now, I’m going line by line justifying whether each sentence belongs, which inspires this question:

            What exactly were R&R thinking was remotely useful or illuminating about including the mean in that manner? They can't handwave the question away by pointing out that the 2010 median is vaguely close to the 2013 corrected mean, or that they went back and fixed everything in 2012. What was the idea behind that original mean? Unless they explain that, it reads an awful lot like they devised a misleading statistic to be flashy and tacked on the median to maintain a pretense of academic credibility.
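
            On the mean-versus-median question, a quick sketch (again with invented numbers, not real data) of how much flashier the mean gets when a single outlier country-year sits in the high-debt bucket:

            # Invented numbers again; only the arithmetic is the point.
            from statistics import mean, median

            growth_above_90 = [2.7, 2.6, 2.5, 2.4, 2.3, 2.2, -7.6]  # one crisis outlier

            print(f"median: {median(growth_above_90):+.1f}%")  # barely moved by the outlier
            print(f"mean:   {mean(growth_above_90):+.1f}%")    # dragged well below the median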

  • wengler

    It’s the same factor at work in having some Global Warming denialist present findings. They don’t have to be true, they just have to be referenceable.

    See also: Ann Coulter’s voluminous footnotes.

    • sparks

      When I first saw the error here, I was immediately reminded of some of the papers arguing against AGW, and certain comments by people of lesser statistical ability that show up at WUWT.

  • Major Kong

    Austerity can never fail, it can only be failed.

  • Manju

    I'm not understanding why the original paper is that unnerving. Austerity is part of Keynesian econ. That's what you do once you're close to full employment. Austerity vs. stimulus is a matter of when, not if.

    Did the researchers claim that crossing the 90% threshold slows growth during a recession? They appear to be talking about long-term debt overhangs (20 year avg) so that doesn’t seem to be the case.

    I think a Keynesian would expect borrowing & spending (and, for that matter, cutting taxes) to correlate with slower growth during normal times. That's what GW Bush did.

    • OmerosPeanut

      It’s the unstated conclusion that austerity is the correct course when debt reaches 90% of GDP regardless of employment or GDP growth levels. In other words, the paper’s role as intellectual cover for the “Austerity now!” movement that swept Europe and has more recently dominated our domestic debate.

  • OmerosPeanut

    When I assume my conclusion, after exhaustive modeling of the data I am able to conclude my assumption.

  • Malaclypse

    “When I use a data set,” Reinhart and Rogoff said in rather a scornful tone, “it means just what we choose it to mean — neither more nor less.”
    “The question is,” said Alice, “whether you can make data mean so many wrong things.”
    "The question is," said Reinhart and Rogoff, "which is to be master – that's all."

  • rea

    “The question is,” said Reinhart and Rogoff, “which is to be master grandmaster.”

    • bobbyp

      But they’re only playing 2 dimensional chess. (please, no snarky replies along the lines of, “What other kind is there?”)

  • Murc

    You know, when I saw this had gotten to 150+ posts very quickly my first thought was “oh, lord. It’s a combination of trolls and genuine conservatives, isn’t it. Did we get linked somewhere disgusting again?”

    I was pleasantly surprised to see a long and very informative discussion on the virtues and flaws of Excel.

    • If we could just avoid talking about how Excel Totally Believes In Austerity and is Only Pretending to Be Left Wing, we’ll have a success!

  • Loud Liberal

    What I'd like to know is how Reinhart and Rogoff define growth. Because if it does not mean rising middle-class income, I don't give a shit about growth.

