haha, Sociology as well -- especially now that the web provides huge amounts of behavioral data.
I much prefer how machine learning folks tend to focus on predictive accuracy, though I guess that's not quite the same as understanding relationships between specific variables while controlling for others.
There is a notion of 'feature importance,' which comes up especially with decision trees and random forests: a measure of how much a particular feature contributes to the overall prediction. It seems like combining predictive power with feature importance would be an interesting alternate route to demonstrating important correlations. (For example, maybe a model predicts lung cancer with 90% precision, and 'is_smoker' has an 80% feature importance.) Of course, these importances depend a lot on the other features used by the model! If you include a lot of junk features and/or exclude other important features, the importance of your pet feature will shoot up.
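To make the idea concrete, here's a small numpy-only sketch using permutation importance (shuffle one column, measure the accuracy drop). The data, the OLS stand-in classifier, and the feature names are all made up for illustration; tree-based importances would be the more usual choice, but the shape of the result is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Toy data: 'is_smoker' actually drives the label, 'junk' is pure noise.
is_smoker = rng.integers(0, 2, n).astype(float)
junk = rng.normal(size=n)
y = (is_smoker + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
X = np.column_stack([is_smoker, junk])

# Ordinary least squares as a stand-in classifier, thresholded at 0.5.
w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), y, rcond=None)

def accuracy(feats):
    preds = (np.column_stack([feats, np.ones(n)]) @ w > 0.5).astype(int)
    return (preds == y).mean()

base = accuracy(X)

# Permutation importance: shuffle one column, measure the accuracy drop.
importances = {}
for name, col in [("is_smoker", 0), ("junk", 1)]:
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    importances[name] = base - accuracy(Xp)

print(base, importances)
```

Shuffling 'is_smoker' costs the model a big chunk of accuracy; shuffling 'junk' costs essentially nothing, which is the sense in which one feature is "important" and the other isn't.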
Hmm, interesting -- I never considered the idea of including junk features to bias a model toward whatever theoretically ambiguous idea you're trying to promote. That's actually brilliant.
Shameless plug: I'm a co-author of a method that adds artificial junk features and removes original features that are likely noise, in order to approximate the set of *all* features relevant to the problem (rather than the standard "build the best model" approach, which can be pretty deceiving). https://m2.icm.edu.pl/boruta
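The core trick can be sketched in a few lines of numpy: append shuffled 'shadow' copies of the real features, and only keep a real feature if its importance beats the best shadow's. This is just an illustration of that idea, not the actual Boruta algorithm (which uses random forest importances and repeated statistical tests); the importance proxy here is a standardized OLS coefficient, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic data: one informative feature, one pure-noise feature.
signal = rng.normal(size=n)
noise = rng.normal(size=n)
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([signal, noise])

# Shadow features: column-wise shuffled copies of the real ones.
shadows = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
Xa = np.column_stack([X, shadows])

# Importance proxy: absolute OLS coefficient on standardized columns.
Z = (Xa - Xa.mean(0)) / Xa.std(0)
w, *_ = np.linalg.lstsq(np.column_stack([Z, np.ones(n)]), y, rcond=None)
imp = np.abs(w[:4])  # columns 0-1 are real, 2-3 are shadows

threshold = imp[2:].max()        # best importance any shuffled junk achieved
relevant = imp[:2] > threshold   # keep real features that beat the junk
print(dict(zip(["signal", "noise"], relevant)))
```

Since a shadow feature is by construction unrelated to the target, whatever importance it picks up is a baseline for "importance you can get by chance" -- a real feature that can't beat that baseline probably isn't telling you anything.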