Saturday, June 28, 2014

Linear Probability Models for Skewed Distributions with High Mass Points

There are a lot of methods discussed in the literature for modeling skewed distributions with high mass points, including log transformations, two-part models, and GLMs. In some previous posts I have discussed linear probability models in the context of causal inference. I've also discussed the use of quantile regression as a strategy to model highly skewed continuous and count data. Mullahy (2009) alludes to the use of quantile regression as well:

"Such concerns should translate into empirical strategies that target the high-end parameters of particular interest, e.g. models for Prob(y ≥ k | x) or quantile regression models"

The focus on high-end parameters using linear probability models is also mentioned in Angrist and Pischke (2009):

"COP [conditional-on-positive] effects are sometimes motivated by a researcher's sense that when the outcome distribution has a mass point-that is, when it piles up on a particular value, such as zero-or has a heavily skewed distribution, or both, then an analysis of effects on averages misses something. Analysis of effects on averages indeed miss some things, such as changes in the probability of specific values or a shift in quantiles away from the median. But why not look at these distribution effects directly? Distribution outcomes include the likelihood that annual medical expenditures exceed zero, 100 dollars, 200 dollars, and so on. In other words, put 1[Yi > c] for different choices of c on the left hand side of the regression of interest...the idea of looking directly at distribution effects with linear probability models is illustrated by Angrist (2001),...Alternatively, if quantiles provide a focal point, we can use quantile regressions to model them."

References:

Angrist, J.D. & Pischke, J.-S. (2009). Mostly Harmless Econometrics. Princeton University Press.

Angrist, J.D. (2001). Estimation of Limited Dependent Variable Models with Dummy Endogenous Regressors: Simple Strategies for Empirical Practice. Journal of Business & Economic Statistics, 19(1).

Mullahy, J. (2009). Econometric Modeling of Health Care Costs and Expenditures: A Survey of Analytical Issues and Related Policy Considerations. University of Wisconsin-Madison. January 2009.


Friday, June 27, 2014

Is distance a proxy for pesticide exposure and is it related to ASD? Some thoughts...


Recently a paper has made some headlines, and the message getting out seems to be that living near a farm field where pesticides have been applied increases the risk of autism spectrum disorder (ASD). A few things about the paper. First, one of the things I admire about econometric work is the attempt to make use of some data set, some variable, or some measurement to estimate the effect of some intervention or policy, in a world where we can't always get our hands on the thing we are really trying to measure. The book Freakonomics comes to mind, as do quasi-experimental designs and the use of instrumental variables.

Second, I'm not an epidemiologist, toxicologist, or entomologist, and I don't have a background in medicine or public health. I don't have the subject matter expertise to critique this article, but I can express my appreciation for the statistical methods the authors used. While they could not (or simply did not) actually measure pesticide exposure in any medical or biological sense, they attempted to infer that distance from an agricultural field might correlate well enough to proxy for exposure. That is a large assumption and perhaps one of the greatest challenges of the study. It is not a study of actual exposure, so from this point on I'll try to refer to 'exposure' in quotes.

That said, the authors did make clever use of some interesting data sources. They matched required pesticide application reports and report dates with the zip codes of the study respondents and reported pregnancy stages to determine distance from application and the point in pregnancy at which mothers were 'exposed'. They reported distance in three bands or buffer zones of 1.25, 1.5, and 1.75 km. This was actually nice work, if distance could be equated to some known level of exposure. Unfortunately, while they cited some other work attempting to tie exposure to ASD, I did not see a citation in the body of the text justifying the use of distance as a proxy, or those particular bands. More on this later. They also attempted to control for a number of confounders, applied survey weighting to 'weight up' the effects to reflect the parent population, and, at least based on my reading, may have even tried to control for some level of selection bias by using IPTW regression in SAS.
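
For readers unfamiliar with IPTW, here is a rough sketch of the idea in Python with simulated data (my own illustration; the authors worked in SAS, and none of the variable names, coefficients, or rates below come from the study):

```python
# A rough sketch of inverse probability of treatment weighting (IPTW);
# everything here is made up for illustration, not the authors' analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
# hypothetical confounders and a binary 'exposure' indicator (living within a buffer)
maternal_age = rng.normal(30, 5, n)
rural = rng.binomial(1, 0.4, n)
p_exposed = 1 / (1 + np.exp(-(-1.5 + 0.8 * rural + 0.02 * maternal_age)))
exposed = rng.binomial(1, p_exposed)
asd = rng.binomial(1, 0.05 + 0.02 * exposed)  # simulated outcome, not real rates

df = pd.DataFrame({"maternal_age": maternal_age, "rural": rural,
                   "exposed": exposed, "asd": asd})

# Step 1: propensity score model for 'exposure' given observed confounders
ps = smf.logit("exposed ~ maternal_age + rural", data=df).fit(disp=0).predict(df)

# Step 2: weight each observation by the inverse probability of the
# 'exposure' status it actually received
df["iptw"] = np.where(df["exposed"] == 1, 1 / ps, 1 / (1 - ps))

# Step 3: weighted logistic outcome model; exp(coef) gives a weighted odds ratio
fit = smf.glm("asd ~ exposed", data=df, family=sm.families.Binomial(),
              freq_weights=df["iptw"].to_numpy()).fit()
print("weighted odds ratio for 'exposure':", round(np.exp(fit.params["exposed"]), 2))
```

The intuition is that re-weighting by the inverse of the estimated probability of the 'exposure' actually received creates a pseudo-population in which the measured confounders are balanced across exposure groups; it does nothing, of course, about confounders that were never measured.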

Discussion of Results

There were at least four major findings in the paper:

(1) Proximity to organophosphate applications at some point during gestation was associated with a 60% increased risk for ASD.

(2) The risk was higher for 3rd trimester 'exposures' [OR = 2.0, 95% confidence interval (CI) = (1.1, 3.6)].

(3) The risk was also elevated for 2nd trimester chlorpyrifos applications [OR = 3.3, 95% CI = (1.5, 7.4)].

(4) Children of mothers residing near pyrethroid insecticide applications just prior to conception or during the 3rd trimester were at greater risk for both ASD and DD, with ORs ranging from 1.7 to 2.3.

So where do we go with these results? First off, all of these findings are based on odds ratios. The reported odds ratio in the first finding above was 1.60, which implies a [1.6 - 1.0]*100 = 60% increase in the odds of ASD for 'exposed' vs. 'non-exposed' children. This is an increase in odds, and it does not have the same interpretation as an increase in probability (see more about logistic regression and odds ratios here). Some might read the headline and walk away with the wrong idea that living within proximity of farm fields with organophosphate applications constitutes 'exposure' to organophosphates and is associated with a 60% increased probability of ASD, but that is stacking one large assumption on top of a misinterpretation.
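
To make the distinction concrete, here is a minimal sketch (my own calculation, not from the paper) of how the same odds ratio of 1.6 translates into different changes in probability depending on an assumed, hypothetical baseline risk:

```python
# Hedged illustration (not from the paper): converting an odds ratio to a
# change in probability requires assuming a baseline probability.
def prob_from_or(baseline_prob, odds_ratio):
    """Return the 'exposed' probability implied by applying an odds ratio
    to a hypothetical baseline probability."""
    baseline_odds = baseline_prob / (1 - baseline_prob)
    exposed_odds = odds_ratio * baseline_odds
    return exposed_odds / (1 + exposed_odds)

for p0 in (0.01, 0.10, 0.30):          # hypothetical baseline probabilities
    p1 = prob_from_or(p0, 1.6)         # OR = 1.6 as in finding (1)
    print(f"baseline {p0:.2f} -> 'exposed' {p1:.3f} "
          f"({100 * (p1 - p0) / p0:.0f}% increase in probability)")
```

For a rare outcome the odds ratio and the relative change in probability are close, but they diverge as the baseline risk grows, which is why reporting a 60% increase in odds as a 60% increase in probability is a misinterpretation in general.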

However, these findings are but a slice of the full results reported in the paper. Table 3 reports a number of findings across the distance bands, types of pesticide, and pregnancy stages. One thing to note about odds ratios: an odds ratio of 1 implies no effect. The vast majority of these findings were associated with odds ratios whose 95% confidence intervals contained 1, or came very close to it. For those who like to interpret p-values, a 95% CI for an odds ratio that contains 1 implies that the estimated regression coefficient in the model has a p-value > .05, i.e., a non-significant result.

Another interesting thing about the table is that there doesn't seem to be any pattern of distance, pregnancy stage, or chemistry associated with the estimated effects or odds ratios, a point made well in a recent blog post regarding this study at scienceblogs.com here.

Sensitivity

From the paper: “In additional analyses, we evaluated the sensitivity of the estimates to the choice of buffer size, using 4 additional sizes between 1 and 2km: results and interpretation remained stable (data not shown).”

That's unfortunate too. Given the previous discussion of odds ratios and the lack of empirical support in the literature for using distance as a proxy for exposure, you would think more sensitivity analysis would be merited to show robustness to all of these assumptions, especially since there is no previous precedent in the literature related to distance. This, in combination with the large number of non-significant odds ratios and the selective reporting of the marginally significant results, is probably what fueled accusations of data dredging.

Omitted Controls

From the Paper: “Primarily, our exposure estimation approach does not encompass all potential sources of exposure to each of these compounds: among them external non-agricultural sources (e.g. institutional use, such as around schools); residential indoor use; professional pesticide application in or around the home for gardening, landscaping or other pest control; as well as dietary sources (Morgan 2012).”

So, there are a number of important routes of exposure that were not controlled for, or, put another way, a good deal of potential omitted variable bias and unobserved heterogeneity. The point of my post is not to pick apart a study linking pesticides to ASD. There are no perfect data sets and no perfect experimental designs. All studies have weaknesses, and my interpretation of this study certainly has flaws. The point is that while this study has made headlines with some media outlets and seems scary, it is not one that should be used to draw sharp conclusions or to run to your legislator demanding new regulations.
This reminds me of a quote I have shared here recently:
"Social scientists and policymakers alike seem driven to draw sharp conclusions, even when these can be generated only by imposing much stronger assumptions than can be defended. We need to develop a greater tolerance for ambiguity. We must face up to the fact that we cannot answer all of the questions that we ask." (Manski, 1995)

References:
Manski, C.F. 1995. Identification Problems in the Social Sciences. Cambridge: Harvard University Press.

Shelton, J.F., Geraghty, E.M., Tancredi, D.J., Delwiche, L.D., Schmidt, R.J., Ritz, B., Hansen, R.L., & Hertz-Picciotto, I. Neurodevelopmental Disorders and Prenatal Residential Proximity to Agricultural Pesticides: The CHARGE Study. Environmental Health Perspectives. June 23, 2014.

Sunday, June 15, 2014

Big Ag and Big Data | Marc F. Bellemare




A very good post about big data in general, and applications in agriculture specifically, by Marc Bellemare can be found here:

http://marcfbellemare.com/wordpress/2014/06/big-ag-and-big-data/#comment-40620

He clears up a misconception that I've talked about before, where some dismiss big data because it doesn't solve all of the fundamental issues of causal inference.

The promises of big data were never about causal inference. The promise of big data is prediction:

"There is a fundamental difference between estimating causal relationships and forecasting. The former requires a research design in which X is plausibly exogenous to Y. The latter only requires that X include as much stuff as possible."

"When it comes to forecasting, big data is unbeatable. With an ever larger number of observations and variables, it should become very easy to forecast all kinds of things …"
"But when it comes to doing science, big data is dumb. It is only when we think carefully about the research design required to answer the question "Does X cause Y?" that we know which data to collect, and how much of them. The trend in the social sciences over the last 20 years has been toward identifying causal relationships, and away from observational data — big or not."
He goes on to discuss how big data is being leveraged in food production, and shares a point of enthusiasm that I think reveals an important point I have made before regarding the convergence of big data, technology, and genomics:

"This is exactly the kind of innovation that makes me so optimistic about the future of food and that makes me think the neo-Malthusians, just like the Malthusians of old, are wrong."
 

Saturday, May 31, 2014

Big Data: Causality and Local Expertise Are Key in Agronomic Applications

In a previous post, Big Data: Don't Throw the Baby Out with the Bathwater, I made the case that in many instances we aren't concerned with issues related to causality.

"If a 'big data' ap tells me that someone is spending 14 hours each week on the treadmill, that might be a useful predictor for their health status. If all I care about is identifying people based on health status I think hrs of physical activity would provide useful info.  I might care less if the relationship is causal as long as it is stable....correlations or 'flags' from big data might not 'identify' causal effects, but they are useful for prediction and might point us in directions where we can more rigorously investigate causal relationships"

But sometimes we are interested in causal effects. If that is the case, the article that I reference in the previous post makes a salient point:

"But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down."

"“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever."

I think that may be the case in many agronomic applications of big data. I've written previously about the convergence of big data, genomics, and agriculture. In those cases, when I think about applications like ACRES or Field Scripts, I have algorithmic approaches (finding patterns and correlations) in mind, not necessarily causation.

But Dan Frieberg points out some very important things to think about when it comes to using agronomic data in a Corn and Soybean Digest article, "Data Decisions: Meaningful data analysis involves agronomic common sense, local expertise."

He gives an example where the data indicate that better yields are associated with faster planting speeds, but something else is really going on:

"Sometimes, a data layer is actually a “surrogate” for another layer that you may not have captured. Planting speed was a surrogate for the condition of the planting bed.  High soil pH as a surrogate for cyst nematode. Correlation to slope could be a surrogate for an eroded area within a soil type or the best part of the field because excess water escaped in a wet year."

He concludes:

"big data analytics is not the crystal ball that removes local context. Rather, the power of big data analytics is handing the crystal ball to advisors that have local context"

This is definitely a case where we might want to look more rigorously at relationships identified by data mining algorithms that may not capture this kind of local context. It may or may not apply to the seed selection algorithms coming to market these days, but as we think about all the data that can potentially be captured through the internet of things, from seed choice, planting speed, depth, temperature, moisture, etc., this could become especially important.

This might call for a much more personal service, including data-savvy reps to help agronomists and growers get the most from these big data apps and the data that new devices and software tools can collect and aggregate. Data-savvy agronomists will need to know the assumptions and nature of any predictions, analysis, or data captured by these devices and apps to know whether surrogate factors like the ones Dan mentions have been appropriately considered. And agronomists, data savvy or not, will be key in identifying these kinds of issues. Is there an app for that? I don't think there is an automated replacement for this kind of expertise, but as economist Tyler Cowen says, the ability to interface well with technology and use it to augment human expertise and judgment is the key to success in the new digital age of big data and automation.

References:

Big Data…Big Deal? Maybe, if Used with Caution. http://andrewgelman.com/2014/04/27/big-data-big-deal-maybe-used-caution/

See also: Analytics vs. Causal Inference http://econometricsense.blogspot.com/2014/01/analytics-vs-causal-inference.html



Thursday, May 29, 2014

AllAnalytics - Michael Steinhart - Doctors: Time to Unleash Medical Big Data

Examples:  "Correlating grocery shopping patterns with incidence of obesity and diabetes
Measuring response rates to cholesterol-lowering drugs by correlating pharmacy refills with exercise data from wearable sensors
Correlating physical distance to hospitals and pharmacies with utilization of healthcare services
Analyzing the influence of social network connections on lifestyle choices and treatment compliance."

http://www.allanalytics.com/author.asp?section_id=3314&doc_id=273502&f_src=allanalytics_sitedefault 

Friday, May 2, 2014

Big Data: Don't Throw the Baby Out with the Bathwater


"Data and algorithms alone will not fulfill the promises of “big data.” Instead, it is creative humans who need to think very hard about a problem and the underlying mechanisms that drive those processes. It is this intersection of creative critical thinking coupled with data and algorithms that will ultimately fulfill the promise of “big data.”

From:  http://andrewgelman.com/2014/04/27/big-data-big-deal-maybe-used-caution/


I couldn't agree more. I think the above article is interesting because, on one hand, people can get carried away with 'big data', but on the other hand they can throw the big data baby out with the bath water. It's true, there is no law of large numbers that implies that as n approaches infinity, selection bias and unobserved heterogeneity go away. Correlations in large data sets still do not imply causation. But I don't think people who have seriously thought about the promises of 'big data' and predictive analytics believe that anyway. In fact, if we are trying to predict or forecast vs. make causal inferences, selection bias can be our friend. We can still get useful information from an algorithm.

If a 'big data' app tells me that someone is spending 14 hours each week on the treadmill, that might be a useful predictor for their health status. If all I care about is identifying people based on health status, I think hours of physical activity would provide useful info. I might care less if the relationship is causal as long as it is stable. Maybe there are lots of other factors correlated with time at the gym, like better food choices, stress management, or even income and geographic and genetic related factors. But in a strictly predictive framework, this kind of 'healthier people are more likely to go to the gym anyway' selection bias actually improves my prediction without my having to have all of the other data involved. The rooster crowing does not cause the sun to come up, but if I'm blindfolded and don't have an alarm clock, hearing the crow might serve as a decent indicator that dawn is approaching. As long as I can reliably identify healthy people, I may not care about the causal connection between hours at the gym and health status, or any of the other variables that may actually be more important in determining health status. It may not be worth the cost of collecting them if I get decent predictions without them.

Similarly, if I can get a SNP profile that correlates with some health or disease status, it may tell me very little about what is really going on from a molecular, biochemical, or 'causal' standpoint, but the test might be very useful. In both of these cases, correlations or 'flags' from big data might not 'identify' causal effects, but they are useful for prediction and might point us in directions where we can more rigorously investigate causal relationships if interested, and 'big data', or having access to more data or richer or novel data, never hurts. If causality is the goal, then merge 'big data' from the gym app with biometrics and the SNP profiles and employ some quasi-experimental methodology to investigate causality.

UPDATE: A very insightful and related article by Tim Harford:

http://timharford.com/2014/04/big-data-are-we-making-a-big-mistake/

"But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down."

"“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever."

This relates back to my earlier statements: "As long as I can reliably identify healthy people, I may not care about the causal connection between hours at the gym and health status, or any of the other variables that may actually be more important in determining health status... In both of these cases, correlations or 'flags' from big data might not 'identify' causal effects, but they are useful for prediction and might point us in directions where we can more rigorously investigate causal relationships if interested."

In a strictly algorithmic and predictive modeling context, the best we can do is assess generalization error through some form of cross validation or the use of training, validation, and test data. And we should always monitor the performance of our models so we can recognize when some of these correlations begin to break down, and update our models to incorporate new information if possible.
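
As a minimal sketch of what that looks like in practice (my own toy example, assuming scikit-learn is available and using simulated stand-in data rather than any real health records):

```python
# A minimal sketch of assessing generalization error with k-fold cross
# validation; the data here are simulated stand-ins, not real health data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
gym_hours = rng.gamma(shape=2.0, scale=3.0, size=n)   # weekly treadmill/gym hours, hypothetical
healthy = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * gym_hours - 2))))  # simulated outcome

X = gym_hours.reshape(-1, 1)
model = LogisticRegression()

# 5-fold cross validation estimates out-of-sample predictive accuracy,
# which is the relevant metric in a purely predictive framework.
scores = cross_val_score(model, X, healthy, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```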

It is crucial that if we are interested in causality, we ensure that we are addressing these issues using the appropriate methodology. But again, if all I care about is an accurate prediction that is stable over time, it may not be worth the effort to find an instrumental variable that will help me identify causal effects, or to seek out proper controls, when a stable, cost-effective prediction is all I need.

See also: http://econometricsense.blogspot.com/2014/01/analytics-vs-causal-inference.html

Monday, April 28, 2014

How is it that Structural Equation Models Subsume Potential Outcomes?

I have been trying to figure out under what conditions we can identify causal effects via structural equation models (SEMs), and in particular whether there is a framework similar to the Rubin causal model or potential outcomes framework that I can utilize in this attempt. In search of an answer I ran across the following article:

Cloak and DAG: A response to the comments on our comment (Comments and Controversies)
Martin A. Lindquist & Michael E. Sobel
NeuroImage. 2013 Aug 1;76:446-9.

On potential outcomes notation:

"Personally, we find that using this notation helps us to formulate problems clearly and avoid making mistakes, to understand and develop identi"cation conditions for estimating causal effects, and, very importantly, to discuss whether or not such conditions are plausible or implausible in practice (as above). Though quite intuitive, the notation requires a little getting used to, primarily because it is not typically included in early statistical training, but once that is accomplished, the notation is powerful and simple to use. Finally, as a strictly pragmatic matter, the important papers in the literature on causal inference (see especially papers by the 3R's (Robins, Rosenbaum, Rubin, and selected collaborators)) use this notation, making an understanding of it a prerequisite for any neuroimaging researcher who wants to learn more about this subject."

The notation definitely takes a little time getting used to, and it is also true for me that it was not discussed early on in any of my graduate econometrics courses. However, Angrist and Pischke's Mostly Harmless Econometrics does a good job making it more intuitive, with a little effort. Pearl and Bollen both make arguments that SEMs 'subsume' the potential outcomes framework. While this may be true, it's not straightforward to me yet, although I have not yet put forth the effort to figure it all out. But I agree it is important to understand how SEMs relate to potential outcomes and causality, or at least to understand some framework to support their use in causal inference, as stated in the article:

"Our original note had two aims. First, we wanted neuroimaging researchers to recognize that when they use SEMs to make causal inferences, the validity of the conclusions rest on assumptions above and beyond those required to use an SEM for descriptive or predictive purposes. Unfortunately, these assumptions are rarely made explicit, and in many instances, researchers are not even aware that they are needed. Since these assumptions can have a major impact on the “finndings”, it is critical that researchers be aware of them, and even though they may not be testable, that they think carefully about the science behind their problem and utilize their substantive knowledge to carefully consider, before using an SEM, whether or not these assumptions are plausible in the particular problem under consideration."

I think the assumptions they provide 1-4b seem to lay a foundation in terms that make sense to me from a potential outcomes framework, and the authors hold that these are the assumptions one should think about before using SEMs for causal inference. Apparently Pearl had some issues with this approach.
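
For readers new to the notation, here is a minimal sketch of the potential outcomes setup and the kind of ignorability assumption involved (my own summary of the standard framework, not the authors' exposition or their specific assumptions 1-4b):

```latex
% Potential outcomes in brief (standard notation; not the authors' assumptions 1-4b)
\begin{align*}
  Y_i(1),\ Y_i(0) &\quad \text{potential outcomes for unit } i \text{ with and without treatment } D_i\\
  Y_i &= D_i\,Y_i(1) + (1 - D_i)\,Y_i(0) \qquad \text{(only one is ever observed)}\\
  \text{ATE} &= E\left[Y_i(1) - Y_i(0)\right]\\
  \{Y_i(1),\,Y_i(0)\} &\;\perp\; D_i \mid X_i \qquad \text{(ignorability: the extra assumption that buys a causal reading)}
\end{align*}
```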