Analysis of Indus basin snow cover


    A Report

    On

    Snow Cover Mapping in Indus basin

    using Remote Sensing

    Submitted By:

    Vinay Kumar G 2011A4PS318H

    Prepared in partial fulfilment of the

    Practice School-I Course No. BITS C-221

    AT

    National Institute of Hydrology, Roorkee

    A Practice School-I station of

    BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE, PILANI

    (JULY, 2013)


ACKNOWLEDGEMENT

I would like to thank all those who supported me during the Practice School at NIH. In particular, I thank Dr. D. S. Rathore and Dr. Tanveer Ahmed, who supervised my training at the National Institute of Hydrology.

I would also like to thank Dr. S. K. Jain and Dr. V. C. Goyal, who mentored our training at the National Institute of Hydrology, as well as our instructor Dr. Chandra Shekhar and our co-instructor Siddharth Arora, who invested their time and effort in me. Finally, I thank all my friends who helped me during my training program.


LIST OF ILLUSTRATIONS

IMAGES

1. Components of remote sensing
2. Differences between active and passive remote sensing
3. Electromagnetic spectrum
4. Energy interactions with the atmosphere
5. Types of noise in remotely sensed data
6. Satellite image and its Fourier transformation image
7. Image classification using pixel values (spectral signal)
8. An illustration showing the flow direction evaluation process
9. Raster calculator used to reassign pixel values in the flow accumulation raster
10. Model used for snow delineation in ERDAS IMAGINE
11. Indus basin
12. SRTM 250 DEM used for watershed delineation
13. SRTM 250 DEM along with the Indus basin watershed
14. FILL raster of Indus basin
15. FLOW DIRECTION raster of Indus basin
16. FLOW ACCUMULATION raster of Indus basin
17. FLOW ACCUMULATION raster along with outlet point
18. Indus basin DEM with classified elevation zones
19. Snow cover raster on 09 March 07
20. Snow cover raster on 18 April 07
21. Snow cover raster on 04 May 07
22. Snow cover raster on 05 June 07
23. Snow cover raster on 07 July 07
24. Snow cover raster on 08 August 07

TABLES

1. Zonal statistics of Indus basin
2. Zonal statistics of snow cover on Indus basin on 09 March 07
3. Zonal statistics of snow cover on Indus basin on 18 April 07
4. Zonal statistics of snow cover on Indus basin on 04 May 07
5. Zonal statistics of snow cover on Indus basin on 05 June 07
6. Zonal statistics of snow cover on Indus basin on 07 July 07
7. Zonal statistics of snow cover on Indus basin on 08 August 07
8. Combined zonal statistics of snow cover on Indus basin on all dates

GRAPHS

1. Spectral reflectance graph of various features
2. Hypsometric graph of elevation zones of Indus basin
3. Snow cover comparison graph of elevation zones of Indus basin on all dates


1. INTRODUCTION

1.1 Remote sensing

Remote sensing refers to techniques for collecting information about an object and its surroundings from a distance, without physically contacting them. For example, when we look at a location or listen to a sound we are employing remote sensing: our eyes and ears act as sensors and collect data in the form of various energies, and this data is processed in our brain. Similarly, in remote sensing, scanners acquire information about an object in the form of electromagnetic energy, which is then interpreted on the basis of the known properties of the target.

When looked at as a whole, remote sensing comprises five basic components:

Image 1; Source [1]

An energy source (SUN)


Interaction of this energy with particles in the atmosphere (B)

Subsequent interaction with the ground target (C)

Energy recorded by a sensor as data (D)

Data displayed for interpretation and processing (F)

There are two major types of remote sensing.

Passive Remote Sensing:

Detects naturally occurring radiation (reflected or emitted) from the terrain of interest, usually reflected sunlight or heat emitted by the Earth.

Active Remote Sensing:

Detects reflected or backscattered radiation from the terrain of interest; here the energy is emitted artificially. RADAR is one such example: the time lag between emission and return is measured, from which properties such as height and distance are derived.

Image 2; Source [2]


Radiation Principles

Remote sensing can be done using various sensors to obtain information about the terrain of interest, but electromagnetic energy sensors are by far the most widely used, on both airborne and spaceborne platforms.

Electromagnetic energy is a form of energy of which visible light is only one part. Depending on wavelength, it is classified as radio waves, microwaves, infrared, visible, ultraviolet, X-rays, gamma rays and cosmic rays. All these radiations travel at the velocity of light, follow wave theory and are described by wave characteristics such as wavelength, frequency and energy.

Image 3; Source [3]

The wavelengths of greatest interest in remote sensing are:


Visible: 0.4 - 0.7 µm
Near infrared: 0.7 - 1.3 µm
Mid infrared: 1.3 - 3 µm
Thermal infrared: 3 - 14 µm
Microwave: 1 mm - 1 m
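As a quick numerical illustration, not part of the original report, the snippet below converts the band boundaries listed above into frequency and photon energy using the standard relations ν = c/λ and E = hν.

```python
# Illustrative only: convert wavelength to frequency (nu = c / lambda)
# and photon energy (E = h * nu).
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck's constant, J s

bands_um = {       # band boundaries in micrometres, from the list above
    "visible": (0.4, 0.7),
    "near infrared": (0.7, 1.3),
    "mid infrared": (1.3, 3.0),
    "thermal infrared": (3.0, 14.0),
}

for name, (lo, hi) in bands_um.items():
    for wl_um in (lo, hi):
        nu = C / (wl_um * 1e-6)       # frequency, Hz
        energy = H * nu               # photon energy, J
        print(f"{name:16s} {wl_um:5.2f} um -> {nu:.3e} Hz, {energy:.3e} J")
```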

1.2 Energy interactions in the atmosphere

Energy interactions between electromagnetic radiation and the objects of interest are the main reason remote sensing is possible. Every object responds differently when electromagnetic radiation is incident on it, and this is what allows us to differentiate the various features on a terrain.

Electromagnetic energy incident on an object is partly absorbed, partly transmitted and partly reflected, in proportions that depend on the texture and nature of the surface.

ABSORPTION occurs when radiation penetrates the body through its surface and the energy is taken up by its molecules. It may be re-emitted in another form of energy, and such emitted radiation is useful for thermal studies. Every object absorbs energy to some extent.

TRANSMISSION occurs when the radiation passes through the object without being absorbed, i.e. without interacting with the body. Transmitted radiation is of least importance for remote sensing.

REFLECTION occurs when radiation is neither absorbed nor transmitted: the radiation is sent back into the atmosphere. Reflection depends on the texture and nature of the object's surface, and hence reflected radiation is used to differentiate between features on the terrain.

The geometric manner in which a body reflects is also important.

Specular reflectors are flat surfaces that produce mirror-like reflections. Near-specular reflectors are those that are not perfect mirrors but tend towards that behaviour.


Diffuse reflectors are rough surfaces that reflect uniformly in all directions. Near-diffuse reflectors reflect in all directions, but not uniformly.

Image 4; Source [4]

When electromagnetic radiation is incident on an object, some of the energy is absorbed, some is transmitted and some is reflected, however small each fraction may be:

EI(λ) = ER(λ) + EA(λ) + ET(λ)

where EI is the incident energy, ER the reflected energy, EA the absorbed energy and ET the transmitted energy, all at wavelength λ.

REFLECTANCE OF RADIATION is the property used to distinguish between features. It is the percentage of incident energy that a surface reflects back. It is characteristic of an object, although the same object can show a different reflectance if it undergoes a physical or chemical change. Reflectance is not the same as reflection.

The measured reflectance is not always the same value, because of energy interactions with the atmosphere. Whatever the radiation, it must pass through the atmosphere on its way from the source to the terrain of interest and from there to the sensor, and the atmosphere interacts with the radiation and can change it. The magnitude of these changes depends on the path length, signal strength, wavelength and other factors. The effects caused by these interactions are due to (1) SCATTERING and (2) ABSORPTION.


Graph 1; Source: [5]

In the above graph we can observe the characteristic spectral signatures of some features such as water, soil and vegetation. Even though they are termed signatures, these curves cannot be considered truly unique, because of atmospheric interactions as well as spatial and temporal effects. As discussed above, atmospheric interactions affect the signals, which therefore change with the atmosphere and climate and are temporal in nature; hence a spectral signature is never unique. Temporal effects appear as climatic changes: clouds can affect the signal, and reflectance can differ after rain because the soil becomes moist. Spatial effects refer to factors that cause the same type of feature, at a given point in time, to vary between geographical locations; examples are climate, soil type and land-use practices at those locations.

1.3 DATA ACQUISITION

Remote sensing can be done with either passive or active sensors. As discussed above, passive remote sensing relies on natural energy sources while active remote sensing uses man-made, artificial energy sources. Electromagnetic radiation emitted from the source interacts with the atmosphere and with the features of interest; combined, these factors result in energy signals from which we extract information. Detection of these signals can be done in two different ways:

Photographically

Electronically

Photographic processes use chemical reactions on the surface of a photographic film. These methods are relatively simple and cheap, and they provide a high degree of spatial detail and geometric integrity. The film acts as both the detecting and the recording medium.

Electronic sensors generate an electronic signal corresponding to the energy variations in the original scene. They are more advantageous than photographic methods because of their broader spectral range of sensitivity and improved calibration potential, and because they can transmit data electronically. However, they are not as cheap and simple as photographic methods. Electronic sensors record data on magnetic tape, which is later converted into photographs as required; here the film acts only as a recording medium.


Analog image digitization:

The data obtained are further analysed digitally. If the data have been acquired photographically, they are first converted into digital format and then analysed. Analog image digitization can be done by

Optical-mechanical scanning

Linear or area-array photodiode or charge-coupled device (CCD) digitization

Video digitization

Storage of digital data:

Popular formats for storing digital data are

Band sequential (BSQ)

Band interleaved by line (BIL)

Band interleaved by pixel (BIP)

Run-length encoding

1.4 DATA ANALYSIS

The analysis of remotely sensed data is performed using a variety of image processing techniques, including

Analog (visual) image processing of hard-copy data

Applying digital image processing algorithms to digital data

Analog Image Processing:

Most of the fundamental elements of image interpretation are used in analog image analysis, for example size, shape, shadow, tone or colour, texture, site and association.

    Digital Image Processing:


Digital image processing involves developing and rectifying images with the aid of a computer. The fundamental methods are image rectification or restoration, image enhancement and image classification. Digital analysis depends mostly on the colour and tone of the individual pixels. Digital images are processed by the computer on the basis of equations, and the results are further images, tabular values, and so on.

Image rectification or restoration:

Image restoration and rectification techniques correct the distortion, degradation and noise introduced during the imaging process, and involve both radiometric and geometric corrections. To correct the data, internal and external errors must be detected: internal errors arise from the sensor or technical failures, while external errors include atmospheric interactions that distort the signal. These processes are usually called pre-processing of the digital imagery.

Geometric Correction:

Sometimes the images are so distorted geometrically that they cannot be used for processing. This may be caused by various factors such as the altitude and velocity of the sensor, the curvature of the Earth, the Earth's rotation (often a cause of panoramic distortion), atmospheric refraction and relief displacement. Geometric corrections rectify these distortions to an extent that makes the images usable again.

Systematic distortions can be rectified fairly easily by developing a mathematical model of the source of distortion and then applying the corresponding transformations.

Random distortions are usually corrected by geo-referencing ground control points on the map to their coordinates. A transformation equation is then developed by least-squares regression and the image is resampled accordingly.

Radiometric corrections:

Radiometric corrections are required to rectify errors caused by changes in scene illumination, viewing geometry, atmospheric conditions and so on. Viewing-geometry corrections are needed more in airborne than in spaceborne remote sensing.

    Noise removal:


Image noise is any unwanted disturbance in image data due to limitations in the sensing, signal digitization or data recording process. Potential sources of noise range from periodic drift or malfunction of a detector, to electronic interference between sensor components, to intermittent hiccups in the data transmission and recording sequence. Noise can either degrade or totally mask the true radiometric information content of a digital image.

    Image 5; Source [6]

Image Enhancement:

Image enhancement algorithms are applied to an image to increase the interpretability and improve the appearance of the image data. Enhancement always depends on the requirements of the user; there is no single ideal enhancement. Image enhancement techniques are used to make processing easier, to reduce the complexity of the image and to suppress unwanted information in the image.

Image enhancement techniques can be classified as

Point operations: modify the brightness value of each pixel independently.


Local operations: modify the brightness value of each pixel based on its neighbouring pixels.

Both kinds of operation can be applied to any imagery. Image enhancement is performed after image restoration and before image classification.

The most commonly used image enhancement techniques are

Contrast manipulation: gray-level thresholding, level slicing and contrast stretching.

Spatial feature manipulation: spatial filtering, Fourier analysis, edge enhancement.

Multi-image manipulation: multispectral band ratioing and differencing, principal components, canonical components, intensity-hue-saturation (IHS) colour-space transformations and decorrelation stretching.

Contrast manipulation:

Gray-level thresholding

Thresholding is used to divide an image into two classes:

one for all pixels with a gray level greater than a user-defined value, and

one for all pixels with a gray level less than that value.

Thresholding is usually used to create binary masks, which are then used to operate on each class of the image separately without affecting the other.
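A minimal NumPy sketch of the gray-level thresholding described above; the image array and the threshold value are hypothetical.

```python
import numpy as np

def threshold_mask(band, threshold):
    """Binary mask: 1 where the gray level exceeds the threshold, 0 elsewhere."""
    return (band > threshold).astype(np.uint8)

# Hypothetical 8-bit image and user-defined threshold
image = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
mask = threshold_mask(image, 128)

# The mask lets you operate on one class without affecting the other,
# e.g. keep only the pixels above the threshold:
bright_only = image * mask
```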

    Level slicing

    Level slicing is a technique where the DNs distributed along the x-axis of an image

    histogram are divided into a series of user defined intervals or slices. All the DNs in

    the same interval are assigned a single DN. Each level can also be shown as a single

    colour.

    Level slicing is used extensively in the display of thermal infrared images in order to

    show discrete temperature ranges coded by gray level or colour.

Contrast stretching

Contrast stretching is a technique in which a limited range of gray levels is stretched over the full gray-scale range.

Example: consider an image whose gray levels vary only from 80 to 190. These values are stretched from 80-190 to 0-255, which increases the interpretability of the image and makes the features easier to distinguish.


The stretch can be performed in different ways, for example linearly, according to the histogram frequencies, or by omitting some values altogether.
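A hedged NumPy sketch of the linear contrast stretch described above (e.g. 80-190 stretched to 0-255); the input image is hypothetical.

```python
import numpy as np

def linear_stretch(band, in_min, in_max, out_min=0, out_max=255):
    """Linearly map gray levels from [in_min, in_max] onto [out_min, out_max]."""
    band = band.astype(np.float64)
    stretched = (band - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
    return np.clip(stretched, out_min, out_max).astype(np.uint8)

# Example from the text: gray levels spanning only 80-190 are stretched to 0-255
image = np.random.randint(80, 191, size=(512, 512), dtype=np.uint8)
enhanced = linear_stretch(image, in_min=80, in_max=190)
```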

Spatial feature manipulation:

Spatial filtering

Spatial filtering is a local operation. Spatial filters emphasize or de-emphasize various spatial frequencies of an image. Spatial frequency refers to the roughness of the tonal variations in an image: if the gray levels change very abruptly over a small area, the area is said to have a rough tonal texture, or high spatial frequency, and vice versa.

Low-pass filters emphasize low-frequency detail and de-emphasize high-frequency detail, while high-pass filters emphasize high-frequency detail and de-emphasize low-frequency detail. Low-pass filters can be used to reduce random noise.
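A small sketch of low-pass and high-pass spatial filtering with 3x3 kernels using SciPy; the mean and Laplacian-style kernels are common textbook choices, not ones prescribed by the report.

```python
import numpy as np
from scipy.ndimage import convolve

image = np.random.randint(0, 256, size=(256, 256)).astype(np.float64)

# Low-pass (mean) filter: smooths tonal variations and reduces random noise
low_pass_kernel = np.ones((3, 3)) / 9.0
smoothed = convolve(image, low_pass_kernel)

# High-pass filter: emphasizes abrupt tonal changes (high spatial frequency)
high_pass_kernel = np.array([[-1.0, -1.0, -1.0],
                             [-1.0,  8.0, -1.0],
                             [-1.0, -1.0, -1.0]])
detail = convolve(image, high_pass_kernel)
```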

Edge enhancement

Edge enhancement delineates the edges of the shapes and details in an image, making them more conspicuous and easier to interpret. Edges may be enhanced using linear or non-linear edge enhancement.

Linear edge enhancement is done by applying a directional first-difference algorithm, which approximates the first derivative between two adjacent pixels. The smoothness or roughness of the edges depends on the size of the kernel used: the larger the kernel, the smoother the edge.

Non-linear edge enhancements are performed using non-linear combinations of the pixels; many such algorithms are applied with kernels of various sizes. Sobel's and Roberts' edge detectors are examples of non-linear edge enhancement operators.
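A brief sketch of non-linear edge enhancement using the Sobel operator mentioned above (SciPy); adding a fraction of the gradient magnitude back to the image is one common way to sharpen edges, shown here as an assumption rather than the report's exact procedure.

```python
import numpy as np
from scipy.ndimage import sobel

image = np.random.randint(0, 256, size=(256, 256)).astype(np.float64)

gx = sobel(image, axis=0)          # derivative across rows
gy = sobel(image, axis=1)          # derivative across columns
edges = np.hypot(gx, gy)           # gradient magnitude delineates the edges

edge_enhanced = image + 0.5 * edges
```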

Fourier analysis

Fourier analysis is a mathematical technique for separating an image into its various spatial frequency components. Fourier magnitude images are symmetric about the centre, and the intensity at the centre represents the magnitude of the lowest spatial frequency.
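A minimal NumPy sketch of the Fourier magnitude image just described: after an FFT shift, the lowest spatial frequency sits at the centre of the symmetric magnitude image.

```python
import numpy as np

image = np.random.rand(256, 256)

spectrum = np.fft.fftshift(np.fft.fft2(image))   # zero (lowest) frequency moved to the centre
magnitude = np.log1p(np.abs(spectrum))           # log scale for easier viewing
```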


Supervised classification

With supervised classification, we identify examples of the information classes of interest (i.e. land cover types) in the image; these are called training sites. The image processing software is then used to develop a statistical characterization of the reflectance of each information class. This stage is often called signature analysis, and the characterization may be as simple as the mean or the range of reflectance in each band, or as complex as a detailed analysis of the means, variances and covariances over all bands. Once a statistical characterization has been obtained for each information class, the image is classified by examining the reflectance of each pixel and deciding which signature it resembles most.

    Image 7; Source [8]

Maximum likelihood classification

Maximum likelihood classification is a statistical decision criterion that assists in the classification of overlapping signatures: pixels are assigned to the class of highest probability.

The maximum likelihood classifier is considered to give more accurate results than the parallelepiped classifier, although it is much slower because of the extra computation. The claim of higher accuracy assumes that the classes in the input data have Gaussian distributions and that the signatures were well selected, which is not always a safe assumption.
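A compact sketch of a per-pixel Gaussian maximum likelihood rule of the kind described, assuming one mean vector and covariance matrix per class estimated from training sites; the signatures and pixels below are hypothetical.

```python
import numpy as np

def maximum_likelihood_classify(pixels, class_stats):
    """pixels: (N, bands) array; class_stats: {class_id: (mean, covariance)}.
    Each pixel is assigned to the class with the highest Gaussian log-likelihood."""
    class_ids = sorted(class_stats)
    scores = []
    for cid in class_ids:
        mean, cov = class_stats[cid]
        inv_cov = np.linalg.inv(cov)
        _, log_det = np.linalg.slogdet(cov)
        diff = pixels - mean
        mahal = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)   # squared Mahalanobis distance
        scores.append(-0.5 * (log_det + mahal))                 # log-likelihood up to a constant
    return np.array(class_ids)[np.argmax(scores, axis=0)]

# Hypothetical signatures from training sites (3 bands, 2 classes) and random pixels
stats = {1: (np.array([50.0, 60.0, 55.0]), np.eye(3) * 25.0),
         2: (np.array([120.0, 90.0, 40.0]), np.eye(3) * 40.0)}
pixels = np.random.rand(1000, 3) * 255
labels = maximum_likelihood_classify(pixels, stats)
```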


Minimum distance classification

Minimum distance classification assigns image data to one of up to 256 possible class signature segments, as specified by the signature parameter. Each segment in the signature stores the data pertaining to a particular class, but only the mean vector of each class signature segment is used; other data, such as standard deviations and covariance matrices, are ignored (unlike in the maximum likelihood classifier, which uses them).

The result of the classification is a theme map written to a specified database image channel. A theme map encodes each class with a unique gray level; the gray-level value used to encode a class is specified when the class signature is created. If the theme map is later transferred to the display, a pseudo-colour table should be loaded so that each class is represented by a different colour.
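For comparison, the minimum distance rule needs only the class mean vectors; a minimal sketch with hypothetical means:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose mean vector is nearest (Euclidean distance)."""
    ids = sorted(class_means)
    means = np.array([class_means[c] for c in ids])               # (classes, bands)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return np.array(ids)[np.argmin(dists, axis=1)]

pixels = np.random.rand(1000, 3) * 255
labels = minimum_distance_classify(pixels, {1: [50, 60, 55], 2: [120, 90, 40]})
```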

Parallelepiped classification

The parallelepiped classifier uses the class limits stored in each class signature to determine whether a given pixel falls within a class. The class limits specify the dimensions (in standard-deviation units) of each side of a parallelepiped surrounding the class mean in feature space.

If the pixel falls inside the parallelepiped, it is assigned to that class. If the pixel falls within more than one class, it is put in the overlap class (code 255); if it does not fall inside any class, it is assigned to the null class (code 0).

The parallelepiped classifier is typically used when speed is required. The drawback is, in many cases, poor accuracy and a large number of pixels classified as ties (the overlap class, 255).
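A sketch of the parallelepiped rule with limits of mean ± k standard deviations per band, using the class coding given above (0 = null, 255 = overlap); the class statistics are hypothetical.

```python
import numpy as np

def parallelepiped_classify(pixels, class_stats, k=2.0):
    """class_stats: {class_id: (mean, std)} arrays per band; class ids must be < 255.
    Returns 0 (null class), a class id, or 255 (overlap class) for each pixel."""
    labels = np.zeros(len(pixels), dtype=np.uint8)
    for cid, (mean, std) in class_stats.items():
        inside = np.all(np.abs(pixels - mean) <= k * std, axis=1)
        labels = np.where(inside & (labels == 0), cid, labels)                    # first match
        labels = np.where(inside & (labels != 0) & (labels != cid), 255, labels)  # overlap
    return labels

pixels = np.random.rand(1000, 3) * 255
stats = {1: (np.array([50.0, 60.0, 55.0]), np.array([10.0, 10.0, 10.0])),
         2: (np.array([120.0, 90.0, 40.0]), np.array([15.0, 15.0, 12.0]))}
labels = parallelepiped_classify(pixels, stats)
```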

Unsupervised classification

Unsupervised classification is a method that examines a large number of unknown pixels and divides them into classes based on the natural groupings present in the image values. Unlike supervised classification, it does not require analyst-specified training data. The basic premise is that values within a given cover type should be close together in measurement space (i.e. have similar gray levels), whereas data in different classes should be comparatively well separated (i.e. have very different gray levels).

The classes that result from unsupervised classification are spectral classes based on the natural groupings of the image values. Their identity is not initially known, so the classified data must be compared with some form of reference data (such as larger-scale imagery, maps or site visits) to determine the identity and informational value of each spectral class. Thus, in the supervised approach we define useful information categories and then examine their spectral separability, whereas in the unsupervised approach the computer determines the spectrally separable classes and we then define their informational value.

Unsupervised classification is becoming increasingly popular in agencies involved in long-term GIS database maintenance, because there are now systems that use clustering procedures which are extremely fast and require few operational parameters. It is therefore becoming possible for GIS analysts with only a general familiarity with remote sensing to undertake classifications that meet typical map accuracy standards. With suitable ground-truth accuracy assessment procedures, this tool can provide a remarkably rapid means of producing quality land cover data on a continuing basis.
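A minimal k-means clustering sketch, one common clustering procedure for deriving spectral classes in unsupervised classification; the number of clusters and the image are hypothetical.

```python
import numpy as np

def kmeans(pixels, k=5, iters=20, seed=0):
    """Tiny k-means: returns a spectral-class label for every pixel (row of `pixels`)."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels

# Hypothetical 3-band image flattened to (N, 3); the resulting spectral classes still
# have to be compared with reference data before they carry informational meaning.
image = np.random.rand(200, 200, 3).reshape(-1, 3)
spectral_classes = kmeans(image, k=5)
```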


2. DATA SOURCES, SOFTWARE AND METHODOLOGY

2.1 Data Sources

SRTM 250:

The NASA Shuttle Radar Topography Mission (SRTM) data were acquired by radar on board an 11-day Space Shuttle mission in February 2000. The data are available at 250 m resolution for nearly 80% of the Earth's surface and are freely distributed in 5° × 5° tiles. They have a few gaps in high mountains, deserts, water bodies and so on, which are filled through processing in GIS and made available by the Consultative Group for International Agricultural Research-Consortium for Spatial Information (CGIAR-CSI). Recently, a derived elevation DEM has also been made available at 250 m resolution.

MODIS:

The product provides an 8-day composite of surface reflectance in bands 1 to 7 at 500 m resolution from the MODIS sensor. It is a gridded Level-3 product on the sinusoidal projection, derived by selecting one observation from the MODIS daily L2G products over each 8-day period; the selection is based on several factors, e.g. maximum observation coverage, low view angle, absence of cloud or cloud shadow, and aerosol loading. The accompanying data layers are quality assessment, day of observation, solar azimuth, and view and zenith angles. The Version-5 product is a validated Stage-2 product: it has been validated spatially and temporally through ground truth and the like, and is recommended for scientific use. The data are available in 10° × 10° tiles in HDF-EOS format; each tile is 2400 × 2400 pixels.


2.2 Software

ArcGIS:

Esri's ArcGIS is a geographic information system (GIS) for working with maps and geographic information. It is used for creating and using maps; compiling geographic data; analysing mapped information; sharing and discovering geographic information; using maps and geographic information in a range of applications; and managing geographic information in a database. The system provides an infrastructure for making maps and geographic information available throughout an organization, across a community, and openly on the Web.

ArcGIS Desktop consists of several integrated applications, including ArcMap, ArcToolbox, ArcCatalog and ArcScene.

ArcCatalog - used to organize and manage GIS data; it also allows you to preview datasets and to view and manage metadata.

ArcMap - used to view, edit and analyse spatial data and to create maps.

ArcScene - provides the interface for viewing multiple layers of 3D data, visualizing 3D data over a 2D surface, creating 3D surfaces and analysing 3D surfaces.

ArcToolbox - a component of ArcCatalog, ArcMap and ArcScene; it contains tools for geoprocessing, data conversion and defining map projections.

MODIS Reprojection Tool:

This tool is designed for use with higher-level MODIS raster (gridded) data products, e.g. MOD09A1, which are distributed in HDF-EOS format on the sinusoidal projection. The software provides mosaicking, subsetting (spatial and spectral), reprojection and format-conversion functionality, through both command-line and GUI interfaces, and is available on multiple platforms. Supported output formats are raw binary, HDF-EOS and GeoTIFF. Input data of 8-, 16- and 32-bit signed/unsigned integer and 32-bit float types are supported, and the output data type is the same as the input. Several projections are supported, including LCC, UTM, Albers equal area and geographic. The available resampling methods are nearest neighbour, bilinear and cubic convolution. When multiple input files are selected, they are mosaicked, and using the band selector any number of bands may be moved to the output list. In GeoTIFF output format, separate output files are created for each band.


2.3.1 Watershed Delineation:

1. Fill:

The Fill tool in the Hydrology toolbox is used to remove any imperfections (sinks) in the digital elevation model. A sink is a cell that does not have a defined drainage value associated with it. Drainage values indicate the direction in which water will flow out of the cell and are assigned when creating a flow direction grid for the landscape. The resulting drainage network depends on finding the flow path of every cell in the grid, so it is important that the fill step be performed before creating a flow direction grid.

Double-click the Fill tool to open its dialog.

The Input surface raster is the DEM grid.

Leave the Z limit blank and click OK to run the tool. Note that this process is CPU intensive and may take quite some time, depending on the processing power of your workstation.

Once the fill process is complete, a new grid is added to the data frame. There should be a difference in the lowest elevation value between the original DEM and the filled DEM. Remove the original DEM layer from the map (right-click > Remove).
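The same Fill step can be scripted; a minimal arcpy sketch, assuming the Spatial Analyst extension is licensed and that the workspace and layer names are placeholders.

```python
import arcpy
from arcpy.sa import Fill

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\indus\hydro.gdb"   # hypothetical workspace

# Fill sinks in the DEM; leaving the z-limit unset fills all sinks, as in the dialog above.
filled_dem = Fill("srtm_dem")
filled_dem.save("dem_filled")
```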

2. Create Flow Direction:

A flow direction grid assigns to each cell a value that indicates the direction of flow, that is, the direction in which water will flow from that particular cell. This is an extremely important step in hydrological modelling, as the direction of flow determines the ultimate destination of the water flowing across the surface of the landscape.

Flow direction grids are created using the Flow Direction tool. For every 3x3 cell neighbourhood, the grid processor finds the lowest neighbouring cell from the centre. Each number in the matrix below corresponds to a flow direction: if the centre cell flows due north, its value will be 64; if it flows northeast, its value will be 128, and so on. These numbers have no numeric meaning; they are simply coded directional values that indicate the steepest descent based on elevation.


    Image 8; Source: [9]

Double-click the Flow Direction tool to open it. The Input surface raster should be set to the filled DEM. The Output flow direction raster should once again default to your working directory. Open the Environment Settings using the Environments button and confirm that Raster Analysis > Cell Size is set to the same value as your filled DEM.

Click OK to run the tool. This process will take some time to complete; once it has run, a new flow direction raster will be added.
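A corresponding arcpy sketch for this step (same placeholder names as above); the eight D8 codes match the matrix shown in Image 8.

```python
import arcpy
from arcpy.sa import FlowDirection

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\indus\hydro.gdb"   # hypothetical workspace

# Each cell receives one of the eight D8 codes
# (east = 1, southeast = 2, south = 4, ..., north = 64, northeast = 128).
flow_dir = FlowDirection("dem_filled")
flow_dir.save("flow_dir")
```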

3. Create Flow Accumulation:

The Flow Accumulation tool calculates the flow into each cell by accumulating the cells that flow into each downslope cell. In other words, each cell's flow accumulation value is determined by counting the number of upstream cells that flow into it.

Double-click the Flow Accumulation tool to open it.

The Input flow direction raster should be set to the flow direction grid created in Step 2. The Output accumulation raster will default to your working directory.

Accept all other defaults, check the Environment Settings to ensure that the Raster Analysis > Cell Size property is set to the same value as your filled DEM, and click OK to run the tool. This process may take quite some time to complete.

The new flow accumulation raster will be added to your map. Each cell in the grid contains a value that represents the number of cells upstream from that particular cell. Cells with higher flow accumulation values should be located in areas of lower elevation, such as valleys or drainage channels.

Flow direction matrix; flow direction is north, so the cell is coded 64.


It is very likely that the flow accumulation grid will appear dark and uninformative when first added to the map. This can be fixed by altering the symbolization of the layer. Use the Raster Calculator tool (Spatial Analyst > Map Algebra) with the SetNull function to reassign the pixel values of the flow accumulation raster so that only cells above a chosen accumulation threshold are retained; the general form of the function is SetNull(condition, false_value), applied to the flow accumulation raster.
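A sketch of the flow accumulation step together with the SetNull reclassification just described, in arcpy; the accumulation threshold and names are assumptions.

```python
import arcpy
from arcpy.sa import FlowAccumulation, SetNull, Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\indus\hydro.gdb"   # hypothetical workspace

# Number of upstream cells draining into each cell
flow_acc = FlowAccumulation("flow_dir")
flow_acc.save("flow_acc")

# Keep only the main drainage lines for display: cells below the chosen
# accumulation threshold become NoData, the remaining cells are coded 1.
threshold = 20000                              # hypothetical, depends on cell size
streams = SetNull(Raster("flow_acc") < threshold, 1)
streams.save("flow_acc_streams")
```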


4. Create or load pour points:

Pour points (outlet points) may come from an existing file of hydrometric gauging stations, or another data source. However, it is also possible to create pour points yourself. The instructions below include both procedures.

Creating pour points through visual inspection:

Open the ArcCatalog window. Right-click on your working directory and select New > Shapefile. Create a new point shapefile, give it a descriptive name and apply the appropriate projection information (the coordinate system should be the same as that of the DEM or flow direction grid you will be using). Click OK. The new, empty point layer will be added to your map.

Zoom in to your area of interest so that you are able to see the individual flow accumulation cells. Use the Identify tool to examine the values of the flow accumulation grid. The chosen pour point cell should be a natural outlet for the streams flowing above it and must lie on the high flow accumulation path. Your choice essentially determines the end of your catchment: everything upstream of this point will define a single watershed.

To add a pour point, open the Editor toolbar (Customize > Toolbars > Editor) and choose Editor > Start Editing.

If necessary, in the Start Editing dialog, highlight the empty pour point layer and click OK. The Create Features window will open. Highlight the pour point shapefile and then move your cursor onto the map. Add a pour point by clicking in the centre of the high flow accumulation cell you have chosen as your outlet point. Try to place points in the centre of the cells, and remember to place them one or two cells away from stream confluences.

If you are defining only one watershed, save your edits, stop the editing session and move on to Step 5.

If you are creating more than one watershed, add a pour point for each watershed, then save your edits and exit the editing session. Open the attribute table for the layer by right-clicking the layer name and selecting Open Attribute Table. Click the Table Options icon and select Add Field. Create a field of type Integer, precision 0, and call it UNIQUEID. Start another editing session and enter an ID number for each individual pour point (1, 2, 3, and so on). Stop editing and choose to save your edits. Watersheds are delineated based on unique identification numbers, so this step ensures that a separate watershed will be delineated for each individual pour point.

Loading pour points from an existing file:

In many cases you will already have a shapefile that indicates the locations of hydrometric gauging stations or watershed outlet points. If so, add your point file to ArcMap and zoom in to each point to determine whether it falls on the path of high flow accumulation. As mentioned, if the pour points are not situated in cells of high flow accumulation, the resulting watersheds will be very small. The Snap Pour Point tool used in the next step will attempt to snap the pour points to the closest area of high flow accumulation.

5. Snap pour point:

Select Geoprocessing > Environments and set the Processing Extent and Raster Analysis > Cell Size properties to the same values as your flow accumulation grid (or the wfg layer included with the Enhanced Flow Direction grid).

The Snap Pour Point tool accomplishes two things: it snaps the pour point(s) created or loaded in the previous step to the closest area of high flow accumulation, and it converts the pour points to the raster format needed as input to the Watershed tool.

Double-click the Snap Pour Point tool to open it.

Click Environments > Raster Analysis Settings > Cell Size and ensure that the cell size is set to the same value as your flow accumulation layer. Click OK.

The Input raster or feature pour point data is the pour point layer created in Step 4.

The Pour Point Field is the unique ID field created in Step 4; this is only applicable if you are creating more than one watershed.

The Input accumulation raster is your flow accumulation layer.

The Output raster will default to your working directory.

The Snap distance is the distance (in map units) that the tool will search around your pour points for the cell of highest accumulated flow. The snap distance should be based on the resolution of your data and may require some trial and error to determine the most suitable value.
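An arcpy sketch of the pour-point snapping, followed by the Watershed tool that consumes the snapped points; the snap distance, layer names and the use of the UNIQUEID field are assumptions based on the steps above.

```python
import arcpy
from arcpy.sa import SnapPourPoint, Watershed

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\indus\hydro.gdb"   # hypothetical workspace

# Snap each pour point to the cell of highest accumulated flow within the
# search distance (map units) and convert the points to raster form.
snapped = SnapPourPoint("pour_points", "flow_acc", 500, "UNIQUEID")
snapped.save("pour_pts_snap")

# Delineate one watershed per snapped pour point from the flow direction grid.
basin = Watershed("flow_dir", snapped, "Value")
basin.save("indus_watershed")
```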


2.3.2 Elevation Zones:

Any raster DEM can be classified into zones on the basis of elevation. Since the DEM contains the elevation of every pixel, the lowest and highest points of the study area can be found from the properties of the DEM. After finding these, classify the elevations according to the requirement. For example, with 1000 m wide zones, the zones are 0-1000, 1001-2000, 2001-3000, and so on.

A DEM can be classified into elevation zones using the Reclassify tool in ArcMap:

1. Add the DEM using the Add Layer option in ArcMap.

2. Find the lowest and highest elevations of the DEM from its properties.

3. Choose the width of the zones into which you want to classify the DEM.

4. Elevation zones can be created using any tool found in the Reclass toolset (ArcToolbox > Spatial Analyst > Reclass).

5. Reclass by ASCII File requires the classification data in a specified format (syntax) in any ASCII-editable file, which is then provided as input so that the classification can be done.

6. The Reclassify tool can also be used to create elevation zones.

7. The input raster is the DEM file that you wish to reclassify.

8. The reclass field is the attribute on whose basis the classification is to be done; enter the attribute field that contains the elevation data.

9. After entering the above data, click Classify next to the reclass table field. Change the method to Defined Interval and enter the elevation zone width in the Interval field. Press OK, then change the first value in the table to zero.

10. Enter the new values that you wish to assign in their respective fields.

11. Enter the location where you wish the reclassified data to be saved.
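The same reclassification into 1000 m elevation zones can be scripted; a minimal arcpy sketch (the nine zones follow Table 1, and the DEM name is a placeholder).

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\indus\hydro.gdb"   # hypothetical workspace

# 1000 m wide zones: 0-1000 -> 1, 1001-2000 -> 2, ... 8001-9000 -> 9
ranges = [[i * 1000, (i + 1) * 1000, i + 1] for i in range(9)]
zones = Reclassify("indus_dem", "VALUE", RemapRange(ranges), "NODATA")
zones.save("elev_zones")
```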

2.3.3 Snow Mapping:

The data required for snow mapping can be obtained from MODIS as mentioned above. Using the MODIS Reprojection Tool, the required bands from the tiles covering the study area can be mosaicked, resampled and reprojected simultaneously.


1. Input all the files (.hdf) that comprise the snow data of the study area in the input field.

2. Select bands 2, 4 and 6, which are required for snow mapping, and exclude the rest of the bands.

3. Specify an output location for the files to be produced.

4. The MODIS Reprojection Tool provides some basic projections; select a projection depending on the requirement, enter the data needed for reprojecting the files, and set the datum according to your requirement.

5. Set the resampling to nearest neighbour, or another method, depending on the requirements.

6. Now run the program; the files are mosaicked and an output is produced for each band separately.

After bands 2, 4 and 6 of the MODIS data have been separately mosaicked and reprojected, they are processed in ERDAS IMAGINE to delineate the snow.

1. Start ERDAS IMAGINE.

2. Go to Interpreter > Utilities > Layer Stack.

3. Input the band 2 file in the input file field and click the Add button. Input the band 4 file and click Add again, then input the band 6 file and click Add. After adding all three band layers, toggle on the Ignore Zero in Stats field.

4. Specify the output location in the output file field and click OK.

5. Once the three bands are stacked into a single image, use the modeler to delineate snow.

6. Create a model that delineates snow using Modeler > Model Maker.

7. In the model shown below, all the circular blocks contain the functions by which the data files are processed.

8. The zigzag polygonal blocks indicate either input or output.

9. The model prompts for an input file when run; input the file obtained by stacking bands 2, 4 and 6.


10. This input is processed by function blocks that combine the stacked layers, e.g. $n1_PROMPT_USER(2) - $n1_PROMPT_USER(3); the outputs are passed to the temporary rasters n4_memory and n6_memory, which are then evaluated with EITHER ... IF conditional functions (via n10_memory) to produce the final output in n8_memory (see the NDSI sketch after this list).

Image 10

11. The output obtained is the final delineated snow map.


12. This map is then combined with the DEM and the elevation-zone raster to derive values such as the snow cover in each zone.
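The ERDAS model above applies a band-ratio test to the stacked MODIS layers. A hedged NumPy sketch of the usual NDSI-based snow test on MOD09A1 bands 4 (green), 6 (SWIR) and 2 (NIR) is shown below; the 0.4 NDSI threshold and the NIR screen for water are commonly used values, not ones stated in the report, and the reflectance arrays are assumed to be already read into memory and scaled to 0-1.

```python
import numpy as np

def delineate_snow(band4, band6, band2, ndsi_threshold=0.4, nir_threshold=0.11):
    """band4 (green), band6 (SWIR) and band2 (NIR) are 2-D surface-reflectance arrays.
    Returns 1 for snow and 0 otherwise."""
    band4 = band4.astype(np.float64)
    band6 = band6.astype(np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        ndsi = (band4 - band6) / (band4 + band6)
    snow = (ndsi > ndsi_threshold) & (band2 > nir_threshold)   # NIR test screens out water
    return np.where(np.isfinite(ndsi), snow, False).astype(np.uint8)

# Hypothetical reflectance arrays standing in for the three stacked MODIS layers
b4 = np.random.rand(2400, 2400)
b6 = np.random.rand(2400, 2400)
b2 = np.random.rand(2400, 2400)
snow_map = delineate_snow(b4, b6, b2)
```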

2.3.4 Zonal Statistics:

Statistics for every elevation zone can be calculated using tools in ArcMap. Zonal statistics can be used to compare zones with one another or to find the specifics of a zone, such as its median elevation or the area it occupies, which can later be analysed with graphs and other means.

Zonal statistics are calculated using the Zonal Statistics tools (ArcToolbox > Spatial Analyst Tools > Zonal):

1. Input the file containing the zone data (the output obtained by reclassification) as the input raster or feature zone data.

2. In the zone field, input the attribute on whose basis the zones have been classified, i.e. the field that defines the zones.

3. As the input value raster, input the raster file whose values you wish to calculate statistics for.

4. Specify the output location where you wish to save the statistics table.

5. Specify the type of statistics you require in the statistics field, or calculate all types by selecting ALL.

These statistics are generated as a database table, which can be exported to an Excel file if required.
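A scripted equivalent of the dialog steps above, using arcpy's Zonal Statistics as Table tool (names and paths are assumptions):

```python
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\indus\hydro.gdb"   # hypothetical workspace

# Statistics of the snow raster within each elevation zone; "ALL" requests
# every available statistic (count, area, min, max, mean, std, and so on).
ZonalStatisticsAsTable("elev_zones", "VALUE", "snow_map", "snow_zonal_stats", "DATA", "ALL")
```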


3. STUDY AREA

The Indus basin is formed by the Indus and its tributaries. The Indus is a major river of Asia that flows through Pakistan and India, with a course through western Tibet as well. Originating near Lake Manasarovar, the river runs through Jammu and Kashmir, Himachal Pradesh and Punjab in India. It flows for over 3180 km and has a total drainage area exceeding 1,165,000 sq km, with an annual flow estimated at 207 cubic km. The part of the Indus basin studied in this report lies between 76°E, 36°N and 81°E, 31°N.

    Image 11; Source [14]


4. RESULTS

4.1 Watershed of Indus basin:

A DEM covering the Indus basin was taken from SRTM 250. The whole method discussed above was applied and the Indus basin watershed was delineated. The watershed raster was then converted to a polygon file and used to extract just the basin from the whole SRTM 250 tile.

    Image 12

    Image 13


The watershed extends from 76°E, 36°N to 81°E, 31°N and occupies a total area of 178339.1 sq km.

Image 14

This is the DEM of the Indus basin after the Fill tool has been applied to remove sink pixels, which do not have any drainage data.


    Image 15

The Flow Direction tool is applied to the DEM after the Fill tool. This determines the direction in which water flows out of each pixel and the direction from which water enters it.


    Image 16

The Flow Accumulation tool is applied to the flow direction raster of the Indus basin DEM. The Raster Calculator is then used to make the flow accumulation raster more discrete by resetting the pixel values to just two values (0 and 1): higher flow accumulation values are changed to 1 and lower values to 0. This helps in visually identifying the pour point of the basin.


    Image 17

The green circle on the image indicates the outlet point of the basin. The outlet point is the location to which all the water in the watershed flows; it always lies on high flow accumulation pixels and can be thought of as the exit point, or lowest elevation point, of the basin. It is downstream of the entire basin, and the whole basin is upstream of it.


ZONE    PIXELS   AREA (sq km)   MIN ELEV   MAX ELEV   RANGE      MEAN      STD   MEDIAN
1          321          16.98        948       1000      52    989.77     9.43      993
2        37943        2007.18       1001       2000     999   1616.34   264.62     1652
3       138910        7348.33       2001       3000     999   2578.02   278.48     2610
4       442473       23406.82       3001       4000     999   3584.40   279.50     3619
5      1409476       74561.28       4001       5000     999   4564.68   271.91     4586
6      1277074       67557.21       5001       6000     999   5394.74   256.30     5362
7        63025        3334.02       6001       7000     999   6210.16   212.02     6137
8         2012         106.43       7001       7997     996   7255.86   208.96     7199
9           16           0.84       8022       8572     550   8263.37   183.93     8203

TABLE 1

The above table is an analysis of the elevation zones of the Indus basin obtained using the Zonal Statistics tool. The PIXELS column is the total number of pixels in the zone; the AREA column is the total area occupied by the zone in square kilometres; the MEAN column is the mean elevation of the zone; the MEDIAN column is the median elevation in the zone; and the STD column is the standard deviation of the elevation in the zone.

From the table it can be seen that zones 5 and 6 occupy most of the area of the Indus basin.

GRAPH 2

The above graph is a hypsometric graph of the Indus basin, with cumulative area (sq km) on the x-axis and elevation (m) on the y-axis.


SNOW COVER (18 APRIL 07)

Image 20

ZONE    PIXELS   AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321          16.98                  0                    0
2        37941        2007.07                  0                    0
3       138907        7348.18                388               20.525
4       442451       23405.66              96489              5104.26
5      1408970       74534.51             591694             31300.61
6      1275673        67483.1             632157             33441.11
7        62936        3329.31              59093              3126.02
8         2012         106.43               1967               104.05
9           16           0.84                 11                 0.58
TOTAL                                    1381799             73097.17

TABLE 3


SNOW COVER (04 MAY 07)

Image 21

ZONE    PIXELS   AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321          16.98                  0                    0
2        37941        2007.07                  0                    0
3       138907        7348.18                 25                 1.32
4       442451       23405.66              33073              1749.56
5      1408970       74534.51             425619             22515.25
6      1275673        67483.1             596392             31549.14
7        62936        3329.31              58185              3077.98
8         2012         106.43               1968               104.10
9           16           0.84                 16                 0.84
TOTAL                                    1115278             58998.21

TABLE 4


SNOW COVER (07 JULY 07)

Image 23

ZONE    PIXELS   AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321          16.98                  0                    0
2        37942        2007.13                  0                    0
3       138910        7348.33                 21                 1.11
4       442473       23406.82               2339               123.73
5      1409459       74560.38              87520              4629.80
6      1277036        67555.2             307150             16248.24
7        63020        3333.75              50101              2650.34
8         2012         106.43               1889                99.92
9           16           0.84                 16                 0.84
TOTAL                                     449036                23754

TABLE 6


SNOW COVER (08 AUGUST 07)

Image 24

ZONE    PIXELS   AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321          16.98                  0                    0
2        37942        2007.13                  0                    0
3       138910        7348.33                148                 7.82
4       442473       23406.82               1668                88.23
5      1409459       74560.38              38349              2028.66
6      1277036        67555.2             198124             10480.76
7        63020        3333.75              42016              2222.64
8         2012         106.43               1521                80.46
9           16           0.84                 12                 0.63
TOTAL                                     281838             14909.23

TABLE 7


Snow mapping of the Indus basin was carried out for six different dates. The image files obtained after delineating snow in ERDAS IMAGINE were overlaid on the DEM of the Indus basin and on the elevation zones. Zonal statistics of each snow map with respect to the elevation-zone map were then calculated and analysed. Care must be taken that all the maps are in the same projection and have the same pixel size (resolution).

The snow-cover area was also calculated zone-wise for every date, and the data were analysed in tabular and graphical form.

Snow cover (sq km) in each elevation zone on all dates:

DATE     zone 1   zone 2   zone 3     zone 4     zone 5     zone 6    zone 7   zone 8   zone 9       TOTAL
09-Mar        0        0   445.41   13240.39   45078.47   50472.05   3194.36    89.29     0.58    112520.6
18-Apr        0        0    20.52    5104.26   31300.61   33441.11   3126.02   104.05     0.58    73097.17
04-May        0        0     1.32    1749.56   22515.25   31549.14   3077.98   104.10     0.84    58998.22
05-Jun        0        0     0.47     304.28   15172.67   26206.92   3086.23   104.42     0.84    44875.86
07-Jul        0        0     1.11     123.73    4629.80   16248.24   2650.34    99.92     0.84    23754.01
08-Aug        0        0     7.82      88.23    2028.66   10480.76   2222.64    80.46     0.63    14909.23

TABLE 8
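The percentages plotted in Graph 3 below can be reproduced from Tables 1 and 8; a short sketch for the 09 March 07 date (zone areas from Table 1, snow areas from Table 8):

```python
# Percentage of each elevation zone covered with snow on 09 March 07
zone_area_km2 = [16.98, 2007.18, 7348.33, 23406.82, 74561.28,
                 67557.21, 3334.02, 106.43, 0.84]           # Table 1
snow_km2 = [0, 0, 445.41, 13240.39, 45078.47,
            50472.05, 3194.36, 89.29, 0.58]                 # Table 8, 09-Mar row

for zone, (area, snow) in enumerate(zip(zone_area_km2, snow_km2), start=1):
    print(f"zone {zone}: {100.0 * snow / area:5.1f} % snow covered")
```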

GRAPH 3

Snow cover comparison: percentage of each elevation zone covered with snow plotted against date, with one series per elevation zone (zones 1-9).


5. CONCLUSION

As seen in the graphs and tables discussed above, the snow-covered area decreased drastically during May, June and July, which is consistent with these being the summer months. The snow cover in zone 9 did not follow this trend, because the zone lies at an altitude of more than 8000 m above mean sea level and comprises only a very small area.


Bibliography:

1. Remote Sensing and Image Interpretation, by Lillesand and Kiefer.

2. Remote Sensing and GIS Applications, by P. S. Roy and R. S. Dwivedi.

3. Introductory Digital Image Processing, by John R. Jensen.