@nimeshkumar9613

It's really nice to see how clearly you've explained PCA

@shahdsg4527

I used it in sign language recognition by considering the entire sequence as a single pattern

@tysonmarks2578

My understanding is that PCA compresses features into a few components, which may help improve model performance. What if stakeholders need to understand which features are most important? Can those PCA components be mapped back to the original features to obtain feature importance scores?
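You can get a rough answer to this. Each fitted component stores its loadings on the original features, and weighting the absolute loadings by each component's explained variance gives an approximate per-feature importance. A minimal sketch with scikit-learn (the weighting scheme here is one common heuristic, not the only one):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

data = load_iris()
X, feature_names = data.data, data.feature_names

X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_scaled)

# pca.components_ has shape (n_components, n_features): each row is a
# component's loadings on the original features. Weighting the absolute
# loadings by explained variance gives a per-feature importance score.
importance = np.abs(pca.components_).T @ pca.explained_variance_ratio_
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Note this tells you which features drive the retained components, which is not quite the same thing as feature importance for the downstream model.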

@hengkevin6755

This is awesome, now I can train my model faster 💪💪

@lkd982

you make it sound so simple! :)  ... so where's the catch? ;)

@mikeplockhart

What are your thoughts on UMAP for high dimensionality data sets?

@MrDacruz19

This is 👍

@squadgang1678

Is feature scaling required before PCA?
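In practice yes, because PCA maximizes variance, so without scaling, whichever feature has the largest units dominates the components. A small sketch with synthetic data showing the effect:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two equally informative features on wildly different scales.
X = np.column_stack([rng.normal(0, 1, 500), rng.normal(0, 1000, 500)])

# Without scaling, the first component just tracks the large-scale feature.
pca_raw = PCA(n_components=2).fit(X)
print(pca_raw.explained_variance_ratio_)

# After standardization, the variance is split roughly evenly.
pca_scaled = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print(pca_scaled.explained_variance_ratio_)
```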

@deniz-gunay

how do you deploy the model after PCA?

@thespicycabbage

Just an FYI: you lose transparency when you're using PCA. It's usually very hard to explain to stakeholders/customers, and if they don't understand it, they won't like it

@rahulkiroriwal8779

What about mutual_info_regression from sklearn.feature_selection? Any difference in efficiency?
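They solve different problems, so it's not really an efficiency comparison: mutual_info_regression powers supervised feature *selection* (it keeps a subset of the original columns, ranked by dependence on the target), while PCA is unsupervised feature *extraction* (it builds new columns that mix all the originals and never looks at the target). A side-by-side sketch:

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.decomposition import PCA

X, y = make_regression(n_samples=300, n_features=20,
                       n_informative=5, random_state=0)

# Supervised selection: keeps 5 of the ORIGINAL columns,
# ranked by mutual information with y.
selected = SelectKBest(mutual_info_regression, k=5).fit_transform(X, y)
print(selected.shape)

# Unsupervised extraction: 5 NEW columns, each a linear mix
# of all 20 features; y is never used.
extracted = PCA(n_components=5).fit_transform(X)
print(extracted.shape)
```

Selection preserves interpretability (the kept columns are real features); PCA generally compresses variance better but loses that transparency.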

@murataavcu

Is there any version for deep learning?

@Matan-Ben

It doesn't always work; if you have categorical data, it won't work
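Right, PCA only operates on numeric matrices. A common workaround is to one-hot encode the categoricals first and run PCA on the result, though methods designed for categorical data (e.g. multiple correspondence analysis) are arguably more principled. A minimal sketch of the workaround:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import PCA

df = pd.DataFrame({"color": ["red", "blue", "red", "green"],
                   "size": ["S", "M", "L", "M"]})

# One-hot encode the categorical columns into a numeric matrix,
# then reduce. (.toarray() densifies the sparse encoder output.)
X = OneHotEncoder().fit_transform(df).toarray()
reduced = PCA(n_components=2).fit_transform(X)
print(reduced.shape)
```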

@fuba44

Sounds good, I have like 1700 features, but after reducing the dimensions, how would I run inference with the model? Like, what is the input after reducing the dimensions?
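At inference time you still pass the original features; the fitted PCA projects them into the reduced space before the model sees them. The usual way to guarantee this is to bundle everything in a pipeline. A sketch with made-up dimensions (50 in, 10 kept) standing in for your 1700:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=50, random_state=0)

# Bundling scaler + PCA + classifier means the exact same scaling and
# projection learned at fit time is replayed automatically at predict time.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      LogisticRegression(max_iter=1000)).fit(X, y)

# Inference input is still the ORIGINAL 50 features; the pipeline
# maps them into the 10-dimensional PCA space internally.
pred = model.predict(X[:5])
print(pred.shape)
```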

@rayyanamir8560

Doesn't this underfit the model?