
Waterline Data Inventory

Sandbox Setup Guide

for HDP 2.2 and VirtualBox

Product Version 2.0 Document Version 10.15.2015

© 2014 - 2015 Waterline Data, Inc. All rights reserved. All other trademarks are the property of their respective owners.


Table of Contents

Overview
Related Documents
System requirements
Setting up the sandbox
Opening Waterline Data Inventory in a browser
Running Waterline Data Inventory
Exploring the sample cluster
Shutting down the cluster
Accessing the Hadoop cluster using SSH
Loading data into HDFS
Running Waterline Data Inventory jobs
Monitoring Waterline Data Inventory jobs
Configuring additional Waterline Data Inventory functionality
Accessing Hive tables

Overview

Waterline Data Inventory reveals information about the metadata and data quality of files in an Apache™ Hadoop® cluster so that users of the data can identify the files they need for analysis and downstream processing. The application installs on an edge node in the cluster and runs MapReduce jobs to collect data and metadata from files in HDFS and Hive. It then discovers relationships and patterns in the profiled data and stores the results in its metadata repository. A browser application lets users search, browse, and tag HDFS files and Hive tables, drawing on the collected metadata and Data Inventory's discovered relationships.

This document describes running Waterline Data Inventory in a virtual machine image that is pre-configured with the Waterline Data Inventory application and sample cluster data. The image is built from the Hortonworks™ HDP 2.2 sandbox on Oracle® VirtualBox™.


Related Documents

Waterline Data Inventory User Guide (also available from the menu in the browser application)

For the most recent documentation and product tutorials, sign in to Waterline Data Inventory support (support.waterlinedata.com) and go to "Product Downloads, Documentation, and Tutorials".

System requirements

The Waterline Data Inventory sandbox is delivered inside the Hortonworks HDP 2.2 sandbox. The system requirements and installation instructions are the same as those Hortonworks describes:

hortonworks.com/products/hortonworks-sandbox/#install

The Waterline Data Inventory sandbox is configured to use 8 GB of RAM rather than the Hortonworks default of 4 GB.
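
If you need to check or change the memory allocation after importing the VM, VirtualBox's command-line tool can do so while the VM is powered off. A sketch; the VM name below is an assumption, so list the registered VMs first:

$ VBoxManage list vms

$ VBoxManage modifyvm "Waterline Data Inventory HDP 2.2" --memory 8192   # VM name is an assumption; memory is in MB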

The basic requirements are as follows:

For your host computer:

At least 10 GB of RAM

64-bit computer that supports virtualization.

VirtualBox describes the unlikely cases where your hardware may not be compatible with 64-bit virtualization:

www.virtualbox.org/manual/ch10.html#hwvirt


An operating system supported by VirtualBox, including Microsoft® Windows® (XP and later), many Linux distributions, Apple® Mac® OS X, Oracle Solaris®, and OpenSolaris™.

www.virtualbox.org/wiki/End-user_documentation

VirtualBox virtualization application for your operating system. Download the latest version from here:

www.virtualbox.org

Waterline Data Inventory VM image built on Hortonworks HDP 2.2 sandbox, VirtualBox version.

www.waterlinedata.com/downloads

Browser compatibility

Microsoft Internet Explorer 10 and later (not supported on Mac OS)
Chrome 36 or later
Safari 6 or later
Firefox 31 or later

Setting up the sandbox

1. Install VirtualBox.

2. Download the Waterline Data Inventory VM (.ova file).

3. Open the .ova file with VirtualBox (double-click the file).

4. Click Import to accept the default settings for the VM.

This will take a few minutes to expand the archive and create the guest environment.

5. (Optional) Configure a way to easily move files between the host and guest. Some options are:

Configure a shared directory between the host and guest (Settings > Shared Folders, specify auto-mount). From the guest, you can access the shared folder at /media/sf_<shared folder name>; see the example after these steps.

Set up a bi-directional clipboard.

6. Start the VM.

It takes a few minutes for Hadoop and its components to start up.

7. Note the IP address used for SSH access, such as 127.0.0.1, so that you can log in to the guest machine through SSH as waterlinedata/waterlinedata.
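
If you configured an auto-mounted shared folder in step 5, you can check it from the guest terminal once the VM is up. The folder name "transfer" below is an assumption; substitute the name you chose, and note that auto-mounting relies on the VirtualBox Guest Additions:

$ ls /media/sf_transfer   # folder name is an assumption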


Opening Waterline Data Inventory in a browser

The sandbox includes pre-profiled data so you can see the functionality of Waterline Data Inventory before you load your own data.

1. Open a browser to the Waterline Data Inventory application:

http://localhost:8082

or

http://<IP address from step 7>:8082

2. Sign in to Waterline Data Inventory as "waterlinedata", password "waterlinedata".

Running Waterline Data Inventory

If the browser application does not appear, you may need to sign in to the guest and start Waterline Data Inventory manually. If so, follow these steps:

1. Start an SSH session.

(Mac OS X) Open a terminal on the host and connect to the guest.

$ ssh waterlinedata@127.0.0.1 -p2222

Enter the password when prompted ("waterlinedata").

(Windows) Start an SSH client such as PuTTY and identify the connection parameters:
Host Name: the guest IP address (from step 7 above)
Port: 2222
Protocol: SSH

Log in using username “waterlinedata” and password “waterlinedata”.

2. You may be prompted to continue connecting even though the authenticity of the host cannot be established. Enter yes.

3. Start the embedded metadata repository database, Derby.

$ cd /opt/waterlinedata

$ bin/derbyStart

You'll see a response that ends with "...started and ready to accept connections on port 4444". Press Enter to return to the shell prompt.

4. Start the embedded web server, Jetty.

$ bin/jettyStart

The console fills with status messages from Jetty. Only messages identified by "ERROR" or "exception" indicate problems.
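
If you would rather not dedicate a terminal to Jetty's console output, one option is to run it in the background with standard shell tools. This is a general Linux technique, not a documented Waterline procedure, and the log path below is arbitrary:

$ nohup bin/jettyStart > /tmp/jetty-console.log 2>&1 &

$ tail -f /tmp/jetty-console.log   # press Ctrl+C to stop watching; Jetty keeps running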

You are now ready to use the application and its sample data.


Exploring the sample cluster

The Waterline Data Inventory sandbox is pre-populated with public data to simulate a set of users analyzing and manipulating the data. As you might expect among a group of users, there are multiple copies of the same data, standards for file and field names are not consistent, and data is not always wrangled into forms that are immediately useful for analysis. In other words, the data is intended to reflect reality.

Here are some entry points to help you use this sample data to explore the capabilities of Waterline Data Inventory:

Tags

Tags help you identify data that you may want to use for analysis. When you place tags on fields, Waterline Data Inventory looks for similar data across the profiled files in the cluster and suggests your tags for other fields. Use both the tags you enter and the automatically suggested tags in searches and in facet-based search filtering.

In the sample data, look for tags for "Food Service" data.


Lineage relationships, landings, and origins

Waterline Data Inventory uses file metadata and data to identify cluster files that are related to each other. It finds copies of the same data, joins between files, and horizontal and vertical subsets of files. If you mark the places where data comes into the cluster with "Landing" labels, Waterline Data Inventory propagates this information through the lineage relationships to show the origin of the data.

In the sample data, look for origins for "data.gov," "Twitter," and "Restaurant Inspections."


Searching with facets

Use the Global Search text box at the top of the page to do keyword searches across your cluster metadata, including file and field names, tags and tag descriptions, and the 50 most frequent values in each field. Waterline Data Inventory also provides search facets on common file and field properties, such as file size and data density. Some of the most powerful facets are those for tags and origins. Use the facet lists on the Advanced Search page to identify what kind of data you want to find, then use facets in the left pane to refine the search results further.

In the sample data, use "Food Service" tags on the Advanced Search page, then filter the results by origin, such as "Restaurant Inspections".

Shutting down the cluster

To make sure you can restart the cluster cleanly, follow these steps to shut it down:

1. Shut down the cluster.

Choose Machine > Close > ACPI Shut Down. If you don't see this option, press the Option key while opening the menu.
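
You can trigger the same ACPI shutdown from the host command line. The VM name below is an assumption; use VBoxManage list vms to find the registered name:

$ VBoxManage controlvm "Waterline Data Inventory HDP 2.2" acpipowerbutton   # VM name is an assumption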


Accessing the Hadoop cluster using SSH

To run Waterline Data Inventory jobs and to upload files in bulk to HDFS, you will want to access the guest machine using a command prompt or terminal on your host computer through a Secure Shell (SSH) connection. Alternatively, you can use the terminal in the guest VirtualBox window, but that can be awkward.

1. Start an SSH session.

(Mac OS X) In a terminal window, start an SSH session using the IP address provided for the guest instance (step 7 of Setting up the sandbox) and the username waterlinedata, all lower case:

$ ssh waterlinedata@<guest IP address> -p2222

or

$ ssh waterlinedata@localhost -p2222

(Windows) Start an SSH client such as PuTTY and identify the connection parameters:
Host Name: the guest IP address (step 7 of Setting up the sandbox)
Port: 2222
Protocol: SSH

Log in using username “waterlinedata” and password “waterlinedata”.

2. You may be prompted to continue connecting even though the authenticity of the host cannot be established. Enter yes.

Loading data into HDFS

Loading data into HDFS is a two-stage process: first you load data from its source (such as your local computer or a public website) to the guest file system; then you copy the data from the guest file system into HDFS. For a small number of files, the Hue file browser makes this process easy by allowing you to select files from the host computer and copy them directly into HDFS. For larger files or large numbers of files, you may decide to use a combination of an SSH client (to move files to the guest machine) and a command-line operation (to move files from the guest file system to HDFS). If you have a shared directory configured between the host and guest, you can access the files directly from the guest.

Using Hue to load files into HDFS

To access Hue from a browser on the host computer:

http://<cluster IP address>:<Hue port>/filebrowser

For example,

http://localhost:8000/filebrowser

Sign in to Hue as hue with the password 1111.


The following controls on the Hue File Browser page may be useful:

Home
"Home" in Hue is /user/hue. Use the navigation controls to go to other user directories.

New > Directory
Creates a new directory inside the current directory. Feel free to create additional /user directories. Note: Avoid adding directories above /user because it complicates accessing these locations from the Linux command line.

Upload > Files
Select files from your local file system and upload them into the current HDFS directory. Note: Avoid uploading zip files unless you are familiar with uncompressing these files from inside HDFS.

Move to Trash > Delete Forever
"Trash" is just another directory in HDFS, so moving files to trash does not remove them from HDFS; use Delete Forever to remove files permanently.
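
Because "Trash" is just another HDFS directory, files moved there continue to consume cluster space until the trash is emptied. A sketch using the standard HDFS shell (standard Hadoop, not a Hue or Waterline control):

$ hadoop fs -expunge   # removes trash checkpoints older than the configured retention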

Loading files into HDFS from a command line

Copying files to HDFS is a two-step process requiring an SSH connection:

1. Make the data accessible from the guest machine.

There are several ways to do this:

Use an SSH/SFTP client such as PuTTY, FileZilla, or Cyberduck.

Use secure copy (scp).

Configure a shared directory in the VirtualBox settings for the VM.

2. From inside an SSH connection, use the Hadoop file system command copyFromLocal to move files from the guest file system into HDFS.

The following steps describe using scp to copy files into the guest. Skip to step 5 if you chose to use a GUI SSH client to copy the files. These instructions have you use separate terminal windows or command prompts to access the guest machine using two methods:

(Guest) indicates the terminal window or command prompt with an open SSH connection.

(Host) indicates the terminal window or command prompt that uses scp directly.


To copy files from the host computer to HDFS on the guest:

1. (Guest) Open an SSH connection to the guest.

See Accessing the Hadoop cluster using SSH.

2. (Guest) Create a staging location for your data on the guest file system.

The SSH session opens in the working directory /home/waterlinedata. From here, create a directory for your staged data:

$ mkdir data

3. (Guest) If needed, create HDFS directories into which you will copy the files.

Create the directories using Hue or using the following command inside an SSH connection:

$ hadoop fs -mkdir <HDFS path>

For example, to create a new staging directory under /user/waterlinedata:

$ hadoop fs -mkdir /user/waterlinedata/NewStagingArea
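
If the parent directories do not already exist, the standard -p flag of hadoop fs -mkdir creates them as needed (standard Hadoop behavior; the nested path below is only an illustration):

$ hadoop fs -mkdir -p /user/waterlinedata/NewStagingArea/2015/inspections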

4. (Host) In a separate terminal window or command prompt, copy directories or files from host to guest.

Navigate to the location of the data to copy on the host and run the scp command:

$ scp -r ./<directory or file> waterlinedata@<cluster IP address>:<Linux destination>

For example, connecting through the NAT address with the SSH port (note that scp uses an uppercase -P for the port, unlike ssh):

$ scp -r -P 2222 ./NewData waterlinedata@localhost:/home/waterlinedata/data

Or, using the guest IP address directly:

$ scp -r ./NewData waterlinedata@<guest IP address>:/home/waterlinedata/data

5. (Guest) Back in the SSH terminal window or command prompt, copy the files from the guest file system to the cluster using the HDFS command copyFromLocal.

Navigate to the location of the data files you copied in step 4 and copy the files into HDFS using the following command:

$ hadoop fs -copyFromLocal <localdir> <HDFS dir>

For example (all on one line):

$ hadoop fs -copyFromLocal /home/waterlinedata/data/ /user/waterlinedata/NewStagingArea/
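
To confirm the files arrived, list the destination directory with the standard HDFS shell:

$ hadoop fs -ls /user/waterlinedata/NewStagingArea/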


Running Waterline Data Inventory jobs

Waterline Data Inventory format discovery, profiling, and tag propagation jobs are MapReduce jobs run in Hadoop. These jobs populate the Waterline Data Inventory repository with file format and schema information, sample data, and data quality metrics for files in HDFS and Hive.

Lineage discovery, collection discovery, and origin propagation jobs are jobs run on the edge node where Waterline Data Inventory is installed. These jobs use data from the repository to suggest relationships among files, to suggest additional tag associations, and to propagate origin information.

Waterline Data Inventory jobs are run on a command line on the computer on which Waterline Data Inventory is installed. The jobs are started using scripts located in the bin subdirectory in the installation location. For the VM, the installation location is /opt/waterlinedata.
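
The command reference below invokes the script as waterline. If you want that to work from any directory, one option is to add the bin directory to your PATH (a general shell convenience, not a documented Waterline requirement):

$ export PATH="$PATH:/opt/waterlinedata/bin"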

If you are running Waterline Data Inventory jobs in a development environment, consider opening two separate command windows: one for the Jetty console output and a second to run Waterline Data Inventory jobs.


Full HDFS Processing

$ waterline profile <HDFS dir>

Performs the initial profile of your cluster; run it on a regular interval afterward to profile new and updated files. This command triggers profiling as well as the discovery processes that use profiling data. The directory you specify sets the scope of the profiling job. When you've profiled the entire cluster (or enough of it to provide useful profiling information), you are ready to run the lineage discovery command.

HDFS Profiling

$ waterline profileOnly <HDFS dir>

Profiles cluster content. Use this command after you've added files to the cluster but you aren't ready to have Data Inventory suggest tags for the data. Example:

$ waterline profileOnly /user/waterlinedata/Landing

Full Hive Processing

$ profileHive default /user/waterlinedata/.hivetmp

Full profile and discovery of the tables in the indicated Hive databases ("default" in the case of the sandbox). Indicate more than one database with a comma-separated list. To specify individual tables, use the property waterlinedata.profile.hivenamefilter with a regular expression as an override.

Hive Profiling

$ profileHiveOnly default /user/waterlinedata/.hivetmp

Profiles the tables in the indicated Hive databases ("default" in the case of the sandbox). No discovery processes are run.

Tag Propagation

$ waterline tag

Propagates tags across the cluster. Use this command when you know you haven't added new files since the last profile but you have tags and tag associations that you want Data Inventory to consider for propagation.

Lineage Discovery

$ waterline runLineage

Discovers lineage relationships and propagates origin information. Use this command when you have marked folders or files with origin labels and want that information propagated through the cluster. Include this command after the full profile for regular cluster profiling.
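
Putting these commands together, a typical recurring run against the sample Landing directory might look like the following. The directory comes from the profileOnly example above, and pairing profile with runLineage follows the Lineage Discovery note:

$ waterline profile /user/waterlinedata/Landing

$ waterline runLineage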


Monitoring Waterline Data Inventory jobs

Waterline Data Inventory provides a record of job history in the Dashboard of the browser application.

In addition, you can follow detailed progress of each job on the console where you run the command.

Monitoring Hadoop jobs

When you run the “profile” command, you’ll see an initial job for format discovery followed by one or more profiling jobs. There will be at least one profiling job running in parallel for each file type Data Inventory identifies in the format discovery pass.

The console output includes a link to the job log for the running job. For example:

2014-09-20 18:17:27,048 INFO [WaterlineData Format Discovery Workflow V2] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1913847052944_0004/


While the job is running, you can follow this link to see the progress of the MapReduce activity.
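
From an SSH session on the guest, the standard YARN command-line client gives a similar view of running applications (standard Hadoop tooling; this assumes the sandbox exposes the yarn command, as HDP distributions normally do):

$ yarn application -list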

Alternatively, you can monitor the progress of these jobs using Hue in a browser:

For Hortonworks distributions:

http://<cluster IP address>:8000/jobbrowser

You’ll need to specify the “waterlinedata” user.

Monitoring local jobs

After the Hadoop jobs complete, Waterline Data Inventory runs local jobs to process the data collected in the repository. You can follow the progress of these jobs by watching console output in the command window in which you started the job.

Debugging information

There are multiple sources of debugging information available for Data Inventory. If you encounter a problem, collect the following information for Waterline Data support.

Job messages on the console

Waterline Data Inventory generates console output for jobs run at the command prompt. If a job encounters problems, review the console output for clues. To report errors to Waterline Data support, copy this output into a text file or email to help us follow what occurred:

/opt/waterlinedata/bin/waterline profile <HDFS directory>

These messages appear on the console but are collected in a log file with debug logging level:

/var/log/waterline/wdi-inventory.log
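
As with the web server log described below, use tail to watch the most recent entries:

$ tail -f /var/log/waterline/wdi-inventory.log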

Web server console output

The embedded web server, Jetty, produces output corresponding to user interactions with the browser application. These messages appear on the console but are collected in a log file with debug logging level:

/var/log/waterline/waterlinedata.log

Use tail to see the most recent entries in the log:

$ tail -f /var/log/waterline/wdi-ui.log

Advanced Search results

From inside Waterline Data Inventory, use the Advanced Search to identify files that failed to profile. Choose the profile status you are interested in from the Profile Status search facet.


Configuring additional Waterline Data Inventory functionality

Waterline Data Inventory provides a number of configuration settings and integration interfaces to enable extended functionality. Refer to the Waterline Data Inventory Installation and Administration Guide for details:

support.waterlinedata.com/hc/en-us/articles/205840116-Waterline-Data-Inventory-v2-0-Documents

Accessing Hive tables

Waterline Data Inventory makes it easy to create Hive tables from files in your cluster. You can access the Hive instance on the guest through Hue or by connecting to Hive from other third-party query or analysis tools.

Viewing Hive tables in Hue

You can access the Hive tables in your cluster through Hue using the Beeswax query tool:

http://<cluster IP address>:8000/beeswax

Connecting to the Hive datastore

To access Hive tables from Tableau, Qlik Sense, or another analysis tool, you'll need to configure a connection to the Hive datastore on the cluster. For a Waterline Data-supplied cluster, use the following connection information:

Server: the same server IP address you use for Waterline Data Inventory
Port: 10000
Server Type: HiveServer2
Authentication: Username and Password
Username: hive
Password: hive
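
To verify these parameters from the guest command line, you can connect with Hive's standard beeline client. Beeline ships with Hive on HDP, though its presence on this sandbox is an assumption:

$ beeline -u "jdbc:hive2://<cluster IP address>:10000" -n hive -p hive -e "SHOW TABLES;"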