Formerly econometrics at the undergrad level, but I've since done a masters and a research assistantship in statistics. My masters dissertation involved ensembles of SVM learners, hence my partisanship. I've encountered a lot of people who use them like a black box, not realising they require a significant amount of tuning. Anyway, I'm now about 18 months into a statistics PhD, working on model-based clustering and Bayesian nonparametrics, with a particular focus on high-dimensional n<<p datasets. I don't know those authors, but I will check them out. Are you working in a research capacity?
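To illustrate the tuning point (this is just a generic sketch, not anything from my dissertation): an RBF-kernel SVM's performance can swing substantially with the regularisation parameter C and the kernel width gamma, so a cross-validated grid search is the usual minimum before trusting results. Using scikit-learn on a synthetic dataset:

```python
# Minimal sketch: kernelized SVMs are not a black box -- C and gamma
# need tuning, typically via cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic binary classification problem, fixed seed for reproducibility.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Search over the two hyperparameters that matter most for an RBF kernel.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
grid.fit(X_tr, y_tr)
best_score = grid.score(X_te, y_te)
```

The grid and dataset here are illustrative; in practice you'd widen the search (often on a log scale) and nest the search inside an outer validation loop.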
Thanks for the heads up! Happy to hear my paper ignited some interest. I've since found a few bugs in my code and added more bells and whistles to the paper, and hope to release a new much-improved, much-faster version of the package very soon. Consequently, the next draft of the paper will have better results and should finally be ready to submit. So maybe you should hold off on reading it for the time being! Are you using R in your work?
Hey, nice ratings! Can I ask your reason for skipping half ratings? That's an interesting concept.
I feel like kernelized SVMs should be higher than root lasso, though. I'd probably slot kNN in at 1.5 or 2.5 too.