
Web Street Smart

Machine Learning Algorithms


Yes, you should be very careful when selecting the right kind of solution for your model, because if you don't, you might end up losing a lot of time, energy and processing cost. I won't be naming the actual solutions, because you aren't familiar with them yet; instead we will look at the choice in terms of supervised, unsupervised and reinforcement learning. So let's look into the factors that help us select the right kind of machine learning solution.

The first factor is the problem statement. As the name suggests, it tells you what the problem is, and it describes the kind of model you will be building. For example, say the problem is to predict future stock market prices. Anyone new to machine learning would have trouble figuring out the right solution, but with time and practice you will understand that for a problem statement like this, a solution based on supervised learning works best.

Then comes the size, quality and nature of the data. If the data is cluttered and unlabelled, you go for unsupervised learning; if the data is very large and labelled with categories, we normally go for supervised learning solutions.

Finally, we choose the solution based on its complexity. The stock-market problem above could also be solved with reinforcement learning, but that would be very difficult and time-consuming compared with supervised learning.

One last point before we begin: algorithms are not types of machine learning. In the simplest language, they are methods of solving a particular problem.

The first kind of method is classification, which falls under supervised learning. Classification is used when the output you are looking for is a yes or a no, an A or a B, a true or a false. For example, if a shopkeeper wants to predict whether a particular customer will come back to his shop or not, he will use a classification algorithm. The algorithms that fall under classification are decision tree, Naive Bayes, random forest, logistic regression and KNN.

The next kind is regression. This method is used when the predicted value is numerical in nature. If the shopkeeper wants to predict the price of a product based on its demand, he would go for regression.
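The shopkeeper's price-from-demand prediction can be sketched with ordinary least squares in a few lines of plain Python. The demand and price numbers below are invented purely for illustration.

```python
# Least-squares linear regression sketch: predicting a product's price
# from its demand. All data values here are made up for illustration.

def fit_line(xs, ys):
    """Return (slope, intercept) of the line minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

demand = [10, 20, 30, 40, 50]   # units requested per week (toy data)
price  = [15, 25, 35, 45, 55]   # observed price (toy data)

slope, intercept = fit_line(demand, price)
predicted = slope * 60 + intercept   # predicted price at demand = 60 -> 65.0
```

Because the toy data is perfectly linear, the fitted line is y = x + 5 and the error is zero; on real data the fitted line is the one with the smallest total squared error, exactly as described in the linear regression section below.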

The last method is clustering. Clustering is a kind of unsupervised learning; it is used when the data needs to be organised. Most of the recommendation systems used by Flipkart, Amazon, etc. make use of clustering. Another major application is in search engines: a search engine studies your old search history to figure out your preferences and give you the best search results. One of the algorithms that fall under clustering is k-means.

Now that we know the various kinds of algorithms, let's look into four key algorithms that are widely used, and understand them with very simple examples: k-nearest neighbour, linear regression, decision tree and Naive Bayes.

Let's start with our first machine learning solution, k-nearest neighbour (KNN). KNN is a kind of classification algorithm. Imagine a scatter plot on which similar data points form clusters: a blue one, a red one and a green one, three different clusters. When we get a new, unknown data point, it is classified based on the cluster closest to it, i.e. the one most similar to it. The k in KNN is the number of nearest neighbouring data points we wish to compare the unknown data point with.

Let's make it clear with an example. Say we have three clusters on a cost-versus-durability graph: the first cluster is footballs, the second is tennis balls and the third is basketballs. From the graph we can say that the cost of footballs is high and their durability is low, the cost of tennis balls is very low but their durability is high, and for basketballs the cost is about as high as the durability. Now say we have an unknown data point, a black spot that must be one of these kinds of balls, but we don't know which kind. We classify it using KNN: taking k = 5, we draw a circle with the unknown data point at its centre, just large enough to contain five balls. In this case we get one football, one basketball and three tennis balls. Since tennis balls are the majority inside the circle, the unknown ball is classified as a tennis ball. That's how k-nearest-neighbour classification is done.

Linear regression is again a type of supervised learning algorithm. It is used to establish a linear relationship between variables, one of which is dependent and the other independent. For example, if we want to predict the weight of a person based on his height, weight would be the dependent variable and height the independent one.

Let's look at it through an example. Say we have a graph showing the relationship between height and weight, with height on the x-axis and weight on the y-axis. The green dots are the data points, and d is the error: the vertical distances from the regression line to the data points are the error values, and the mean squared error built from them tells us how much the predicted values vary from the original values. Suppose we draw a regression line through the data and the distance of the data points from that line is very high; then the error in the prediction will be too high, and the model will not be able to give us a good prediction. Say we draw another regression line, and even in this case the distance of the data points from the line is very high; the error value will still come out as high as the last one.

That model will also not be able to give us a good prediction. So what do we do? Finally we draw a line for which the distance of the data points is very small relative to the other two lines we drew, so the value of d for this line is very low. Now if we take any value on the x-axis, the corresponding value on the y-axis is our prediction, and given that d is very low, our prediction should be good. This is how linear regression works: we draw a regression line in such a way that the value of d is the least, eventually giving us good predictions.

The next algorithm, decision tree, is a kind of algorithm you can very strongly relate to. It uses a branching method to represent the problem and make decisions based on conditions. Imagine yourself sitting at home, getting bored, and you feel like going for a swim. What you do is check whether it's sunny outside; that's your first condition. If the answer is yes, you go for a swim. If it's not sunny, the next question you ask yourself is whether it's raining outside; that's condition number two. If it's actually raining, you cancel the plan and stay indoors. If it's not raining, you would probably go outside and have a walk; that's the final leaf node. That's how the decision tree algorithm works: it represents a problem as a tree and takes decisions based on the answer to each condition. You probably use this every day.

The Naive Bayes algorithm is mostly used in cases where a prediction needs to be made on a very large dataset. It makes use of conditional probability: the probability of an event, say A, happening given that another event B has already happened. This algorithm is most commonly used for filtering spam in your email account. When you receive a mail, the model goes through your old spam mail records and then uses Bayes' theorem to predict whether the present mail is spam or not. Bayes' theorem says

P(C|A) = P(A|C) × P(C) / P(A)

where P(C|A) is the probability of event C occurring when A has already occurred, P(A|C) is the probability of event A occurring when C has already occurred, P(C) is the probability of event C occurring, and P(A) is the probability of event A occurring.

Let's try to understand Naive Bayes with a better example. It can be used to determine on which days to play cricket, based on the probabilities of a day being rainy, windy or sunny, and the model tells us if a match is possible. If we consider all the weather conditions to be event A and "a match is possible" to be event C, the model plugs the probabilities of events A and C into Bayes' theorem and predicts whether a game of cricket is possible on a particular day. If P(C|A) comes out greater than 0.5, we will be able to play a game of cricket; if it is less than 0.5, we won't. That's how the Naive Bayes algorithm works.

So that brings us to the end of the article. I hope you understood the concepts. Put your doubts and feedback in the comments below, and stay tuned for more articles.
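The ball-classification example above can be sketched in a few lines of plain Python. The (cost, durability) coordinates below are invented for illustration; the query point plays the role of the unknown black spot.

```python
from collections import Counter
import math

def knn_classify(points, labels, query, k=5):
    """Classify `query` by majority vote among its k nearest labelled points."""
    dists = sorted(
        (math.dist(p, query), lbl) for p, lbl in zip(points, labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy (cost, durability) points for each kind of ball -- made-up values.
points = [(8, 2), (7, 3), (8, 3),    # footballs: high cost, low durability
          (2, 8), (1, 9), (2, 9),    # tennis balls: low cost, high durability
          (7, 7), (8, 8), (7, 8)]    # basketballs: cost ~ durability
labels = ["football"] * 3 + ["tennis"] * 3 + ["basketball"] * 3

print(knn_classify(points, labels, query=(3, 7), k=5))  # -> tennis
```

With k = 5, the circle around the query point contains three tennis balls and two basketballs, so the majority vote classifies the unknown ball as a tennis ball, just as in the example.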
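The swim-or-walk decision tree above maps directly onto nested conditions in code; each `if` is one internal node of the tree and each `return` is a leaf.

```python
def weekend_plan(sunny, raining):
    """Decision tree from the example: each branch checks one condition."""
    if sunny:                  # condition 1: is it sunny outside?
        return "go for a swim"
    if raining:                # condition 2: is it raining outside?
        return "stay indoors"
    return "go for a walk"     # leaf node: neither sunny nor raining

print(weekend_plan(sunny=False, raining=False))  # -> go for a walk
```

A decision tree learned from data works the same way, except the algorithm chooses which condition to test at each node automatically.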
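The cricket example reduces to one application of Bayes' theorem. The probability values below are invented for illustration only; in a real model they would be estimated from historical weather and match records.

```python
def bayes(p_a_given_c, p_c, p_a):
    """Bayes' theorem: P(C|A) = P(A|C) * P(C) / P(A)."""
    return p_a_given_c * p_c / p_a

# Invented numbers: C = "a match is possible", A = "today's weather pattern".
p_c = 0.6          # prior probability that a match is possible on any day
p_a_given_c = 0.7  # chance of this weather on days when a match was possible
p_a = 0.5          # overall chance of this weather on any day

p_c_given_a = bayes(p_a_given_c, p_c, p_a)           # 0.7 * 0.6 / 0.5 = 0.84
decision = "play" if p_c_given_a > 0.5 else "don't play"
print(decision)  # -> play
```

Since P(C|A) = 0.84 exceeds the 0.5 threshold, the model predicts that a game is possible on this day.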
