Polynomial Regression in Python - SKLEARN 2020 [New Research🔥]





In this tutorial video, we learn polynomial regression in Python using sklearn (2020). We use sklearn's LinearRegression after applying PolynomialFeatures to build a polynomial regression equation. Go through the description below for the definitions and to learn the most:

What is polynomial regression? (Source: Wikipedia)
In statistics, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y | x).

Polynomial Regression in Python using scikit-learn:
Polynomial regression is a special case of linear regression (linear_model). We perform linear regression after creating polynomial features with the scikit-learn package.

With scikit-learn (sklearn), we use two classes: PolynomialFeatures and LinearRegression. The video shows how to use them in detail, step by step.
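The combination described above can be sketched in a few lines. This is a minimal illustration on made-up quadratic toy data, not the Housing.csv example from the video:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Toy data following an exact quadratic trend: y = 2 + 3x + 0.5x^2
x = np.arange(10).reshape(-1, 1)
y = 2 + 3 * x.ravel() + 0.5 * x.ravel() ** 2

# Expand x into the columns [1, x, x^2], then fit an ordinary linear model on them
poly = PolynomialFeatures(degree=2)
x_poly = poly.fit_transform(x)

model = LinearRegression()
model.fit(x_poly, y)

# Predict at a new point; remember to transform it the same way
print(model.predict(poly.transform([[10]])))  # ≈ 82 (= 2 + 30 + 50)
```

The key point is that the model stays linear in its coefficients; only the feature matrix becomes polynomial.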

I used the Housing.csv data for making this video; the file is saved on my Google Drive. To see how to get free access to my Google Drive, continue reading.

Splitting the data for training and testing:
The train_test_split function from sklearn is used to shuffle and split the data. At the end, coef_ and intercept_ give us the coefficients and the intercept.
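As a quick sketch of that split-then-fit step, here is a self-contained example on synthetic data (the names and the 80/20 split are illustrative assumptions, not taken from the video):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 4 + 2x plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 4 + 2 * X.ravel() + rng.normal(scale=0.1, size=100)

# Shuffle and split: 80% train, 20% test (random_state fixes the shuffle)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
print(model.coef_, model.intercept_)  # slope ≈ 2, intercept ≈ 4
```

coef_ holds one coefficient per feature column, while intercept_ is the constant term.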

Matplotlib: To visualize the data, we used the matplotlib.pyplot library. For the x-axis range, we used arange (you can use linspace as well if you want), and then plotted the regression curve.
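A rough sketch of that plotting step, again on toy quadratic data rather than the Housing.csv example (the Agg backend and file name are my own assumptions so the snippet runs without a display):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; lets the script run headless
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Fit a degree-2 model to toy data y = 1 + x^2
x = np.arange(0, 10).reshape(-1, 1)
y = 1 + x.ravel() ** 2
poly = PolynomialFeatures(degree=2)
model = LinearRegression().fit(poly.fit_transform(x), y)

# Dense x-range for a smooth curve: arange here; np.linspace(0, 9, 100) works too
x_line = np.arange(0, 9.01, 0.1).reshape(-1, 1)
y_line = model.predict(poly.transform(x_line))

plt.scatter(x, y, label="data")
plt.plot(x_line.ravel(), y_line, label="fitted curve")
plt.legend()
plt.savefig("regression_curve.png")
```

Predicting on a dense grid is what turns the fitted coefficients into a smooth curve over the scatter plot.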

Evaluation: For the R² score (evaluation), we imported r2_score from sklearn.metrics.
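For reference, r2_score just takes the true and predicted values; the numbers below are a made-up illustration:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9])

# R^2 = 1 - SS_res / SS_tot; 1.0 is a perfect fit, 0.0 is no better than the mean
print(r2_score(y_true, y_pred))  # 0.995
```

In practice you would call this on y_test and model.predict(X_test) to score the held-out data.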
Make sure to subscribe so you don’t miss out on my future videos.

✅ Subscribe: https://www.youtube.com/channel/UCyMifqUrSntvvrrGMaVPkrw?sub_confirmation=1
After subscribing, get free access to my Google Drive: Follow steps on my YouTube Channel.

📺 Watch Sentiment Analysis [Python] Text Mining (NLP) Keras Tokenizer: https://youtu.be/o4zEqZ8Aim4

👔Join the LinkedIn Group for peer to peer discussion: https://lnkd.in/f6ijsF2

👍 Like my Facebook Page to be updated: https://www.facebook.com/Financial.Programming.with.Ritvik/?modal=admin_todo_tour

👉🏼Join the WhatsApp group and learn more: https://chat.whatsapp.com/DFPKbgfrO2NG2hE6z4ppmW

👉🏼 Follow me on Quora: https://www.quora.com/profile/Ritvik-Dashora

⌚TimeStamps:
Introduction (0:00)
Get FREE access to my Google Drive (1:43)
What is Polynomial Regression (4:05)
Code (5:15)
train test split (8:12)
Polynomial Features (12:22)
Sklearn modeling (17:00)
Regression Curve (21:00)
R square (27:35)

I used Anaconda JupyterLab for this code. It is an amazing platform for beginners.

I hope you like my work on the Financial Programming with Ritvik YouTube Channel. Please support me by subscribing to my channel and sharing my videos with your friends. On this channel, I put up videos on how to use your programming skills in finance. I also talk about upcoming fintech trends, automation, and artificial intelligence in finance, specifically AI in financial services. I aim to spread the uses of Python in finance. AI and machine learning with Python are growing fast in the finance industry, and we should be ready for that.




Comment List

  • Financial Programming with Ritvik
    December 21, 2020

It was really helpful, and thanks for the clear explanation of polynomial regression. Hope to see various regression analyses through a GUI sometime, please. Thanks

  • Financial Programming with Ritvik
    December 21, 2020

Thanks for showing polynomial regression. Need clarification: when you use x1^2 as a separate variable (thus turning the quadratic equation into a multi-linear regression), won't we have a multicollinearity issue between the x1 and x1^2 variables? One of the assumptions of the linear regression model is:

    No multicollinearity – i.e., two or more variables shouldn't have high correlation with each other.

    Considering that this assumption is violated, the result in your example can't be right. Please correct my understanding if I'm wrong.

Write a comment