{"id":2490,"date":"2015-06-08T22:22:46","date_gmt":"2015-06-08T21:22:46","guid":{"rendered":"http:\/\/www.blopig.com\/blog\/?p=2490"},"modified":"2015-06-08T22:22:46","modified_gmt":"2015-06-08T21:22:46","slug":"clustering-algorithms","status":"publish","type":"post","link":"https:\/\/www.blopig.com\/blog\/2015\/06\/clustering-algorithms\/","title":{"rendered":"Clustering Algorithms"},"content":{"rendered":"<p>Clustering is a task of organizing data into groups (called clusters), such that members of each group are more similar to each other than to members of other groups. This is a brief description of three popular clustering algorithms \u2013 <strong><a href=\"http:\/\/projecteuclid.org\/euclid.bsmsp\/1200512992\">K-Means<\/a>, <a href=\"http:\/\/www.sciencedirect.com\/science\/bookseries\/01678892\">UPGMA<\/a> and <a href=\"http:\/\/citeseerx.ist.psu.edu\/viewdoc\/summary?doi=10.1.1.71.1980\">DBSCAN<\/a><\/strong>.<\/p>\n<div id=\"attachment_2487\" style=\"width: 584px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2015\/06\/Clustering.png?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" aria-describedby=\"caption-attachment-2487\" loading=\"lazy\" class=\" wp-image-2487\" title=\"\" src=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2015\/06\/Clustering.png?resize=574%2C396&#038;ssl=1\" alt=\"Cluster analysis\" width=\"574\" height=\"396\" srcset=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2015\/06\/Clustering.png?w=603&amp;ssl=1 603w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2015\/06\/Clustering.png?resize=300%2C207&amp;ssl=1 300w\" sizes=\"auto, (max-width: 574px) 100vw, 574px\" \/><\/a><p id=\"caption-attachment-2487\" class=\"wp-caption-text\">Cluster analysis<\/p><\/div>\n<p><a href=\"http:\/\/projecteuclid.org\/euclid.bsmsp\/1200512992\">K-Means<\/a> is arguably the <strong>simplest and most popular clustering 
algorithm</strong>. It takes one parameter – <strong>the expected number of clusters k</strong>. At the initialization step, <strong>a set of k means</strong> m<sub>1</sub>, m<sub>2</sub>, …, m<sub>k</sub> is generated (hence the name). At each iteration step, every object in the data is assigned to the cluster with the nearest mean, which minimises the within-cluster sum of squares. After the assignments, the <strong>means are updated</strong> to be the centroids of the new clusters. The procedure is repeated until <strong>convergence</strong>, which occurs when the means no longer change between iterations.</p>
<p>The main strength of K-Means is that <strong>it is simple and easy to implement</strong>. Its largest drawback is that <strong>one needs to know in advance how many clusters the data contains</strong>. Another problem is that with a <strong>poor initialization</strong> the algorithm can easily converge to a <strong>local minimum</strong>, which may result in a suboptimal partitioning of the data.</p>
<p><a href="http://www.sciencedirect.com/science/bookseries/01678892">UPGMA</a> (Unweighted Pair Group Method with Arithmetic mean) is a simple <strong>hierarchical clustering method</strong>, in which the distance between two clusters is taken to be the average of the distances between the individual objects in the clusters. At each step the two closest clusters are merged, until all objects are in clusters where the average distance between objects is below a <strong>specified cut-off</strong>.</p>
<p>The UPGMA algorithm is <strong>often used for the construction of phenetic trees</strong>. The major issue with the algorithm is that <strong>the tree it constructs is ultrametric</strong>, which means that the distance from root to any leaf is the same.
In the context of evolution, this means that the UPGMA algorithm assumes a <strong>constant rate</strong> of accumulation of mutations, an assumption which is often incorrect.</p>
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.1980">DBSCAN</a> is a <strong>density-based algorithm</strong> which tries to separate the data into regions of high density, labelling points that lie in low-density areas as outliers. The algorithm takes <strong>two parameters – ε and minPts</strong> – and looks for points that are <strong><em>density-connected</em></strong> with respect to ε. A point p is said to be <strong><em>density-reachable</em></strong> from a point q if there is a chain of points from q to p in which no step moves further than ε, and every point along the chain (except possibly p itself) is a <em>core point</em>, i.e. has at least minPts neighbours within ε. Because the <strong>concept of density-reachability is not symmetric</strong>, the concept of density-connectivity is introduced: two points p and q are density-connected if there is a point o such that both p and q are density-reachable from o. A set of points is considered a cluster if all points in the set are mutually density-connected and <strong>the number of points in the set is equal to or greater than minPts</strong>. The points that cannot be assigned to any cluster are classified as <strong>noise</strong>.</p>
<div id="attachment_2491" style="width: 538px" class="wp-caption alignnone"><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.1980"><img class="wp-image-2491" src="https://i0.wp.com/www.blopig.com/blog/wp-content/uploads/2015/06/Reachability.png?resize=528%2C130&#038;ssl=1" alt="a) Illustrates the concept of density-reachability.
b) Illustrates the concept of density-connectivity" width="528" height="130" /></a><p id="caption-attachment-2491" class="wp-caption-text">a) Illustrates the concept of density-reachability.<br />b) Illustrates the concept of density-connectivity</p></div>
<p>The DBSCAN algorithm <strong>can efficiently detect clusters with non-globular shapes</strong>, since it is sensitive only to changes in density. Because of this, the clustering reflects the real structure present in the data. The main difficulty with the algorithm is <strong>the choice of the parameter ε</strong>, which controls how large the difference in density needs to be to separate two clusters.</p>