As a beginner in this field, I think your videos make the whole process quite clear and easy to understand. Thank you so much, Professor Raschka. Keep creating excellent content plz :)
omg you have a YouTube channel! I've been working through your Python ML book for months now. I've been struggling on page 228, so I went to YouTube. Five minutes into this video I was like, "this guy is using all the same examples from the book" lol. To my surprise, it was you! This is amazing! I'll have to go back and watch some of your other lectures!
Thank you so much for sharing the lectures. They are really helpful!
Wonderful illustrations, thanks a lot!
Very nice explanation
Very helpful video!
Many thanks for the great explanation!
Many thanks for sharing the lecture video. Page 16: it seems that instead of the ceiling function, k > ceil(n/2), the floor function should be used: k > floor(n/2). If we have 15 base estimators, then k > 7.
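For what it's worth, a quick sketch checking the arithmetic in the comment above, assuming simple majority voting and using n = 15 as the hypothetical number of base estimators it mentions:

```python
import math

n = 15  # hypothetical number of base estimators, as in the comment above
threshold_floor = math.floor(n / 2)  # 7: the majority is wrong once k >= 8
threshold_ceil = math.ceil(n / 2)    # 8: would incorrectly require k >= 9

# The ensemble's majority vote is wrong as soon as k > floor(n/2) base
# estimators are wrong.
for k in (7, 8, 9):
    print(k, "wrong base estimators out of", n,
          "-> majority wrong?", k > threshold_floor)
```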
Thank You <3
So in majority voting, the final prediction value is 100%, whereas in average (soft) voting the prediction value is the average of all the class predictions?
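A minimal scikit-learn sketch contrasting the two modes asked about above; the dataset and the three estimators are just placeholders, not anything prescribed by the lecture:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
estimators = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=1)),
]

# Hard (majority) voting: each classifier casts one class-label vote,
# and the class with the most votes wins.
hard = VotingClassifier(estimators=estimators, voting="hard").fit(X, y)

# Soft (average) voting: the predicted class probabilities are averaged
# across classifiers, and the argmax of the average is returned.
soft = VotingClassifier(estimators=estimators, voting="soft").fit(X, y)

print(hard.predict(X[:3]))
print(soft.predict(X[:3]))
print(soft.predict_proba(X[:3]))  # averaged probabilities, not 0/1 votes
```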
Professor Raschka, please allow me to ask: is there any theoretical background for which algorithms we can and cannot combine into a voting classifier? Say we built models from the same X and y using logistic regression with a lasso penalty, logistic regression with an elastic-net penalty, decision trees, random forest, AdaBoost, and XGBoost, each showing different accuracy results from stratified k-fold cross-validation. Is it acceptable to create a soft voting classifier (or weighted voting classifier) from the decision tree model and the random forest (considering a random forest is itself already a combination and soft vote of several decision trees)? Is it acceptable to create a voting classifier consisting of XGBoost and AdaBoost? Is it acceptable to create a voting classifier consisting of logistic regression with a lasso penalty and another logistic regression with an elastic-net penalty (considering elastic net is already a combination of lasso and ridge)? I understand that we are free to do anything with our data, and I believe combining similar models will at least help narrow the standard deviation of the cumulative average across the cross-validation splits. But is it theoretically acceptable? Thank you for your patience. I am sorry for the beginner question. Good luck to everyone.
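Mechanically, nothing in scikit-learn prevents such combinations; as a purely illustrative sketch (the dataset, estimators, and settings below are assumptions, not a recommendation), one can compare a single tree, a random forest, and their soft-voting combination under stratified k-fold cross-validation:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

tree = DecisionTreeClassifier(max_depth=4, random_state=1)
forest = RandomForestClassifier(n_estimators=100, random_state=1)
vote = VotingClassifier(estimators=[("tree", tree), ("forest", forest)],
                        voting="soft")

# Compare the individual models against their soft-voting combination.
for name, clf in [("tree", tree), ("forest", forest), ("soft vote", vote)]:
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name:9s} {scores.mean():.3f} +/- {scores.std():.3f}")
```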
How can I cite this material if I want to refer to it in my work?
Hello sir, may I know how a voting classifier can be implemented with a CNN model approach?
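One common approach (a sketch only, not the only way) is soft voting over the softmax outputs of several independently trained CNNs; the probability arrays below are made-up placeholders standing in for each network's predictions:

```python
import numpy as np

# Placeholder softmax outputs from three trained CNNs for the same batch of
# 4 images over 3 classes. In practice these would come from each network's
# forward pass / predict step.
probs_cnn1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1],
                       [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]])
probs_cnn2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
                       [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]])
probs_cnn3 = np.array([[0.8, 0.1, 0.1], [0.3, 0.6, 0.1],
                       [0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])

# Soft voting: average the class probabilities (optionally weighted),
# then take the argmax as the ensemble prediction.
avg_probs = np.mean([probs_cnn1, probs_cnn2, probs_cnn3], axis=0)
ensemble_pred = avg_probs.argmax(axis=1)
print(ensemble_pred)
```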
Do all the classifiers have equal weights (in voting)? If yes, then a classifier with genuinely high accuracy would have the same voting weight as a worse classifier. Please answer me. I am stuck, sir. Help.
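For reference, scikit-learn's VotingClassifier accepts a weights argument, so the votes need not be equal; a minimal sketch in which one classifier is weighted more heavily (the weight values here are arbitrary examples, not tuned):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# weights=[2, 1, 1]: the random forest's probabilities count twice as much
# as each of the other two classifiers' in the soft-voting average.
clf = VotingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(random_state=1)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",
    weights=[2, 1, 1],
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```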
There seems to be an issue with the zip function argument not being iterable in the ensemble.VotingClassifier method. Does anyone know how to solve it?
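Hard to say without seeing the code, but one common cause of this error is passing bare estimators instead of the list of (name, estimator) tuples that VotingClassifier expects; a minimal working sketch, assuming scikit-learn and a placeholder dataset:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A frequent cause of the zip/iteration error is passing bare estimators,
# e.g. VotingClassifier([LogisticRegression(), DecisionTreeClassifier()]),
# instead of the expected list of (name, estimator) tuples shown below.
clf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(random_state=1)),
    ],
    voting="hard",
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```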