Enrollment No.- 1205024747    Paper Code: MC0088

Worker process isolation mode enables you to completely isolate an application in its own process, with no dependence on a central process such as Inetinfo.exe to load and execute it. All requests are handled by worker processes that are isolated from the Web server itself. Process boundaries separate each application pool, so that when an application is routed to one application pool, applications in other application pools do not affect it. By using application pools, you can run all application code in an isolated environment without incurring a performance penalty.
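
To illustrate the idea of process boundaries in general terms, here is a plain Python analogy (not IIS itself; the pool names are made up). Each "application pool" runs behind its own worker process, so a fault in one cannot take the others down:

    # A loose analogy for worker-process isolation: one worker process per "pool".
    from multiprocessing import Process

    def run_app_pool(name):
        # Stand-in for serving the requests routed to this pool.
        if name == "pool-b":
            raise RuntimeError("application fault in pool-b")
        print(f"{name} served its requests")

    if __name__ == "__main__":
        pools = [Process(target=run_app_pool, args=(n,))
                 for n in ("pool-a", "pool-b", "pool-c")]
        for p in pools:
            p.start()
        for p in pools:
            p.join()
        # pool-b fails with a traceback, but pool-a and pool-c finish normally,
        # because each runs behind its own process boundary.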

5. Differentiate between K-means and Hierarchical clustering.

Answer: Hierarchical clustering is the sort we might apply when there is a "tree" structure to the data. Think of the classification of living things. At the top, all of them, then splitting into plants, animals and other things. Once we are on the animal branch, this splits into mammals, reptiles, etc., and we can keep going until we get down to individual species. At no time, once things have been split off from the rest of the data onto one of the branches, do they ever move to another branch. We might think about whether this is appropriate for our data. Once we have split our data up into two sets, this split is final, and the process only subdivides further - nothing from set one ever moves back into set two.
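
To make the tree idea concrete, here is a minimal sketch using SciPy's agglomerative clustering on made-up two-dimensional data; the data, group centres and parameter choices are illustrative assumptions rather than anything given in the question. Cutting the same merge tree at different heights gives nested solutions, which is exactly the property described above:

    # Hierarchical (agglomerative) clustering sketch with SciPy; synthetic data only.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Three loose groups of two-dimensional points around different centres.
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                   for c in ([0, 0], [5, 0], [0, 5])])

    # Build the full merge tree (the dendrogram) bottom-up with Ward linkage.
    Z = linkage(X, method="ward")

    # Cutting the tree at four clusters and at three clusters gives nested answers:
    # every three-cluster group is a union of groups from the four-cluster cut.
    labels_4 = fcluster(Z, t=4, criterion="maxclust")
    labels_3 = fcluster(Z, t=3, criterion="maxclust")
    print(labels_4[:10], labels_3[:10])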

K-means clustering does not assume a tree structure. In its pure form we might ask the computer to split the data values into three groups or into four groups, but we cannot guarantee that merging two groups from the four-group solution will produce the same result as the three-group solution.
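
A matching sketch with scikit-learn's KMeans on the same kind of synthetic data (again, purely illustrative) shows the contrast: each value of k is fitted independently, so nothing ties the four-group solution to the three-group one:

    # K-means sketch with scikit-learn; synthetic data only.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                   for c in ([0, 0], [5, 0], [0, 5])])

    # Each k is fitted from scratch; there is no tree relating the two solutions,
    # so the three-group answer need not be a merge of the four-group answer.
    labels_3 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    labels_4 = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(labels_3[:10], labels_4[:10])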

If we have only two or three dimensions (or can sensibly reduce our data by factor analysis) we can plot the data and see what sort of structure we have. Are we looking for nice spherical clusters, or are long chains more suitable?
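
As a small illustration of that step (the six-variable data set here is invented, and factor analysis is only one of several ways to reduce the dimensions), we might reduce the data to two factors and look at the scatter:

    # Reduce made-up six-variable data to two factors and plot them,
    # to eyeball whether the groups look spherical or chain-like.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 6))   # stand-in for a real six-variable data set

    scores = FactorAnalysis(n_components=2).fit_transform(X)
    plt.scatter(scores[:, 0], scores[:, 1])
    plt.xlabel("Factor 1")
    plt.ylabel("Factor 2")
    plt.show()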

We might consider that our data values were generated from multivariate normal random variables from groups with different means, and we might consider how best to identify these groups and their means.
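
One way to act on that picture (a sketch under the assumption that a Gaussian mixture model is an acceptable stand-in for "groups of multivariate normals with different means"; the data are again synthetic) is:

    # Fit a Gaussian mixture to recover the groups and their means; synthetic data only.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                   for c in ([0, 0], [5, 0], [0, 5])])

    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    print(gmm.means_)            # estimated group means
    print(gmm.predict(X[:10]))   # most likely group for the first ten points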

Sometimes data values fall into such clear groups that almost all clustering methods will find the same clusters. Where the boundaries are fuzzy, the solutions may be very different.

I'll end with a little parable. Suppose I have a very willing idiot working for me, and I ask him to arrange my books nicely. He might do this by author or by subject, or by the color of the cover, or the size of the book, or by weight, or by date of publication. If I simply ask for a "nice arrangement" I ought not to complain about any of these, and I might find one or more useful. If we just ask SPSS to use cluster analysis to produce a "nice arrangement", we likewise ought not to complain about whichever grouping it gives us - it may or may not be the one we had in mind.