An intuition and implementation summary of Sequential Model-Based Optimization Algorithm for Bayesian Hyper-Parameter Tuning

Summarize math intuition and demonstrate Correlation, Multicollinearity and Exploratory Factor Analysis for feature selection

*Complete Feature Selection Techniques*

*Statistical Test & Analysis · Correlation Analysis · Dimension Reduction · Model Driven*

Summarize math intuition and demonstrate PCA, LDA, MDS, ISOMAP, t-SNE, UMAP for feature dimension reduction

*Complete Feature Selection Techniques*

*Statistical Test & Analysis · Correlation Analysis · Dimension Reduction · Model Driven*

Both Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are linear transformation techniques.

A linear transformation can **scale and rotate** data points from the original axes (a.k.a. the original space) into new axes, and the data distribution changes accordingly as the points are scaled and rotated.

So what PCA and LDA do is find the best transformation (scale and rotation) under a certain criterion.

The following is an example of Linear Transformation and Orthogonal Projection.

**x**…
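To make the scale-and-rotate idea concrete, here is a minimal pure-Python sketch of a 2-D linear transformation and an orthogonal projection; the scale factors, rotation angle, and direction vector below are illustrative choices, not taken from the original figure:

```python
import math

def transform(points, scale=(2.0, 1.0), angle=math.pi / 4):
    """Apply x' = R(angle) @ S(scale) @ x: scale each axis, then rotate."""
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x1, x2 in points:
        sx, sy = scale[0] * x1, scale[1] * x2            # scaling step
        out.append((c * sx - s * sy, s * sx + c * sy))   # rotation step
    return out

def project(point, u):
    """Orthogonal projection of `point` onto the direction of unit vector `u`."""
    dot = point[0] * u[0] + point[1] * u[1]
    return (dot * u[0], dot * u[1])

# the unit basis vectors land on new, stretched-and-rotated axes
print(transform([(1.0, 0.0), (0.0, 1.0)]))
print(project((3.0, 4.0), (1.0, 0.0)))  # → (3.0, 0.0)
```

Applying the same transformation to every data point reshapes the whole distribution, which is exactly the degree of freedom PCA and LDA optimize over.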

Explain and demonstrate Mutual Information, Chi-Square Test, ANOVA F-Test, Regression t-Test and Variance Check for model feature selection

*Complete Feature Selection Techniques*

*Statistical Test & Analysis**Correlation Analysis**Dimension Reduction**Model Driven*

In statistics, the Mutual Information (MI) of two random variables is a measure of the mutual dependence between them. MI is zero if the two random variables are independent, and higher values mean stronger dependency.

For feature selection, we can use MI to measure the dependency between a feature variable and the target variable. MI can be written as:

**I**(**x**, **y**) = **H**(**y**) - **H**(**y**|**x**)
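The formula can be checked with a tiny pure-Python sketch that computes I(x, y) = H(y) - H(y|x) for discrete variables; the helper names here are mine, for illustration only:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H in bits of a discrete label sequence."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def mutual_information(x, y):
    """I(x, y) = H(y) - H(y|x), with H(y|x) the weighted entropy
    of y within each observed value of x."""
    n = len(x)
    h_y_given_x = 0.0
    for xv in set(x):
        y_sub = [yi for xi, yi in zip(x, y) if xi == xv]
        h_y_given_x += len(y_sub) / n * entropy(y_sub)
    return entropy(y) - h_y_given_x

# x fully determines y, so MI equals H(y) = 1 bit
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # → 1.0
# x and y are independent, so MI is zero
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # → 0.0
```

The two extreme cases match the statement above: zero MI for independent variables, maximal MI when the feature determines the target.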

A complete explanation for LightGBM - The Fastest Gradient Boosting Model

**LightGBM** is a Gradient Boosting Decision Tree (GBDT) model developed by Microsoft in 2016. Compared with other GBDT models, LightGBM stands out for its faster training and strong accuracy.

There is no fundamental structural difference between LightGBM and a general Gradient Boosting Decision Tree model, but the following special techniques make LightGBM faster to train.

- Gradient-based One-Side Sampling (GOSS)
- Histogram-Based Best Value Search in Tree Node Splitting
- Optimal Split for Categorical Features
- Exclusive Feature Bundling
- Leaf-wise Tree Growth Strategy
- Parallel Optimization
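As one illustration of the second technique, histogram-based split search bins the feature values and scans only the bin edges instead of every raw value. A simplified regression-style sketch follows; the sum-squared gain formula here is a simplification of LightGBM's actual gradient/hessian statistics, and the bin count is an illustrative choice:

```python
def best_histogram_split(x, y, n_bins=8):
    """Histogram-based split search: bin feature values, accumulate per-bin
    target sums, then scan bin boundaries instead of every raw value."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / width), n_bins - 1)
        sums[b] += yi
        counts[b] += 1
    total_sum, total_cnt = sum(sums), sum(counts)
    best_gain, best_edge = 0.0, None
    left_sum = left_cnt = 0
    for b in range(n_bins - 1):            # only n_bins - 1 candidate splits
        left_sum += sums[b]
        left_cnt += counts[b]
        right_cnt = total_cnt - left_cnt
        if left_cnt == 0 or right_cnt == 0:
            continue
        right_sum = total_sum - left_sum
        # variance-reduction style gain: sum^2 / count on each side
        gain = (left_sum ** 2 / left_cnt + right_sum ** 2 / right_cnt
                - total_sum ** 2 / total_cnt)
        if gain > best_gain:
            best_gain, best_edge = gain, lo + (b + 1) * width
    return best_edge, best_gain

# two clearly separated clusters: the best edge falls between them
print(best_histogram_split([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1]))  # → (3.75, 1.5)
```

With `n_bins` candidates instead of one per distinct value, split search cost no longer grows with the number of data points, which is the source of the speed-up.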

The classic tree-based gradient boosting (GBDT)…

A complete explanation for XGBoost - Most popular Gradient Boosting Model

XGBoost is derived from the Gradient Boosting Model (GBM). Compared with GBM, XGBoost introduces a different way to train the ensemble of weak learners, so let’s start from there.

Suppose we have the below boosting model with **K** trees
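For context, such an ensemble is conventionally written in additive form; the excerpt cuts off here, so the formula below is supplied in standard notation as an assumption:

**ŷ**ᵢ = Σₖ **f**ₖ(**x**ᵢ), k = 1, …, K

where each **f**ₖ is one regression tree and the final prediction is the sum of all K tree outputs.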

Detailed explanation on the algorithm of Variational Autoencoder Model

My math intuition summary for Variational Autoencoders (VAEs) is based on the classical VAE architecture below.

A detailed explanation for CatBoost - Most Delicate Gradient Boosting Model

**CatBoost** is a Gradient Boosting Decision Tree model introduced by Yandex in 2017. Compared with XGBoost and LightGBM, CatBoost is believed to be more accurate and easier to use with categorical data.

The technical innovations of CatBoost are:

- Ordered Target Statistics (TS) Encoding for Categorical Features
- Ordered Boosting
- Oblivious Decision Trees and Feature Combinations for tree node splitting
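A minimal sketch of the first idea, ordered target statistics: each row's category is encoded using only the target values of rows that came *before* it, which avoids target leakage. The smoothing follows the usual (sum + a·p) / (count + a) form; the prior and smoothing values below are illustrative assumptions:

```python
def ordered_target_encoding(cats, targets, prior=0.5, smoothing=1.0):
    """Ordered target statistics: encode each row's category with the
    smoothed target mean of earlier rows of the same category only."""
    sums, counts, encoded = {}, {}, []
    for c, t in zip(cats, targets):
        s, n = sums.get(c, 0.0), counts.get(c, 0)
        # only past rows contribute, so the row's own target never leaks in
        encoded.append((s + smoothing * prior) / (n + smoothing))
        sums[c] = s + t
        counts[c] = n + 1
    return encoded

print(ordered_target_encoding(["a", "a", "b", "a"], [1, 0, 1, 1]))
# → [0.5, 0.75, 0.5, 0.5]
```

Note that the same category ("a") gets different encodings at different positions, because the running statistics change as rows are consumed in order.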

Let’s first go through the general CatBoost training steps.

Introduce and demonstrate how to estimate model feature importance using feature permutation, column dropping, SHAP values and model-specific metrics

One of the most-asked questions when reviewing a machine learning model with business stakeholders is “*what are the major factors impacting the model’s predictions?*”

To answer this question, data scientists often present Feature Importance. So, in this story we will explore Feature Importance and go through three ways to estimate it.

- Built-in Model Specific Feature Importance
- Permutation and Drop Importance
- SHAP values based Feature Importance
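As a sketch of the second approach, permutation importance measures how much a score drops when one feature's column is randomly shuffled; the toy model, data, and metric below are invented purely for illustration:

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Importance of each feature = average drop in score after
    shuffling that feature's column, over n_repeats shuffles."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)                      # break the feature-target link
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - metric(y, [model(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# toy "model" that only looks at feature 0; feature 1 is ignored
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y, p: sum(a == b for a, b in zip(y, p)) / len(y)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8]] * 5
y = [1, 1, 0, 0] * 5
print(permutation_importance(model, X, y, accuracy))
# feature 0 gets a positive importance; feature 1 stays at zero
```

The ignored feature scores exactly zero because shuffling it cannot change the model's output, which is the intuition behind the method.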

One important point regarding Feature Importance: normally, when we talk about feature…

**Part 2. LIME for Image and Text Model Interpretation**

Before we start exploring how to use LIME to explain image and text models, let’s quickly review the LIME intuition introduced in **Part 1**. (Please read Part 1 first for a better reading experience.)

Normally, LIME constructs a surrogate linear regression model to approximate the black-box model’s predictions around one observation and its neighborhood.

So, in order to train the surrogate linear regression model, we first need to build a training dataset; with that dataset, we can fit the surrogate linear regression model. …
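A minimal sketch of that procedure for tabular data: perturb around the observation, weight samples by proximity, and fit a weighted linear surrogate. The per-feature regression is a simplification of LIME's full weighted least squares, and the kernel width, sample count, and black-box function are all illustrative assumptions:

```python
import math
import random

def lime_explain(predict, x0, n_samples=500, kernel_width=3.0, seed=0):
    """Sketch of tabular LIME around one observation x0: sample neighbours,
    weight them by proximity to x0, fit a weighted linear surrogate."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 1) for xi in x0]          # perturbed neighbour
        dist2 = sum((a - b) ** 2 for a, b in zip(z, x0))
        X.append(z)
        y.append(predict(z))                             # black-box prediction
        w.append(math.exp(-dist2 / kernel_width ** 2))   # proximity kernel
    wsum = sum(w)
    coefs = []
    for j in range(len(x0)):
        # weighted simple regression per feature (a simplification:
        # real LIME solves the full weighted least-squares system)
        xm = sum(wi * X[i][j] for i, wi in enumerate(w)) / wsum
        ym = sum(wi * yi for wi, yi in zip(w, y)) / wsum
        num = sum(w[i] * (X[i][j] - xm) * (y[i] - ym) for i in range(n_samples))
        den = sum(w[i] * (X[i][j] - xm) ** 2 for i in range(n_samples))
        coefs.append(num / den)
    return coefs

# black box depends strongly on feature 0 and weakly on feature 1;
# the surrogate's coefficients recover that locally
black_box = lambda z: 3.0 * z[0] + 0.1 * z[1]
print(lime_explain(black_box, [1.0, 1.0]))  # feature-0 coefficient close to 3.0
```

The surrogate coefficients then serve as the local explanation: larger magnitudes mean stronger local influence on the black-box prediction near that observation.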

Data Scientist & Engineer from Sydney