What is Writable in Hadoop?



A. Writable is a Java interface that needs to be implemented for streaming data to remote servers.
B. Writable is a Java interface that needs to be implemented for HDFS writes.
C. Writable is a Java interface that needs to be implemented for MapReduce processing.
D. None of these answers are correct.
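In practice, Writable is Hadoop's serialization interface: every value (and, via WritableComparable, every key) passed between map and reduce tasks must implement it so it can be written to and read from a binary stream, which is why option C is the intended answer. A minimal sketch of a custom Writable, assuming the standard org.apache.hadoop.io API and a made-up PageStatsWritable type:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Hypothetical value type holding a page-view count and the total bytes served.
public class PageStatsWritable implements Writable {
    private long views;
    private long bytes;

    public PageStatsWritable() { }                // Hadoop requires a no-arg constructor

    public PageStatsWritable(long views, long bytes) {
        this.views = views;
        this.bytes = bytes;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(views);                     // serialize the fields in a fixed order
        out.writeLong(bytes);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        views = in.readLong();                    // deserialize in exactly the same order
        bytes = in.readLong();
    }

    public long getViews() { return views; }
    public long getBytes() { return bytes; }
}

The only contract is that readFields() reads the fields in exactly the same order that write() wrote them.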

When are the reducers started in a MapReduce job?


In a MapReduce job, reducers do not start executing the reduce method until all map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The programmer-defined reduce method is called only after all the mappers have finished their work. If reducers do not start before all mappers finish, why does the progress of a MapReduce job show something like Map(50%) Reduce(10%)?

Why is the reducer progress percentage displayed when the mappers have not finished yet?

Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account this data transfer, which is done by the reduce process, so reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer. Although the reducer progress is updated, the programmer-defined reduce method is still called only after all the mappers have finished.

How many Daemon processes run on a Hadoop system?

Hadoop is comprised of five separate daemons, and each of these daemons runs in its own JVM.

The following 3 daemons run on master nodes:
- NameNode - Stores and maintains the metadata for HDFS.
- Secondary NameNode - Performs housekeeping functions for the NameNode.
- JobTracker - Manages MapReduce jobs and distributes individual tasks to machines running the TaskTracker.

The following 2 daemons run on each slave node:
- DataNode - Stores actual HDFS data blocks.
- TaskTracker - Responsible for instantiating and monitoring individual Map and Reduce tasks.


JobTracker and TaskTracker interaction with client applications

Client applications submit jobs to the JobTracker. The JobTracker talks to the NameNode to determine the location of the data. The JobTracker locates TaskTracker nodes with available slots at or near the data and submits the work to the chosen TaskTracker nodes. The TaskTracker nodes are monitored; if they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker. A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable. When the work is completed, the JobTracker updates its status. Client applications can poll the JobTracker for information.
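From the client side, this whole exchange is hidden behind the job submission API. A minimal driver sketch in the classic org.apache.hadoop.mapreduce style, assuming a Hadoop 1.x cluster; the identity Mapper and Reducer are used as placeholders and the input/output paths come from the command line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitJobExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();       // picks up *-site.xml from the classpath
        Job job = new Job(conf, "submit-job-example");  // classic Hadoop 1.x constructor

        job.setJarByClass(SubmitJobExample.class);
        job.setMapperClass(Mapper.class);               // identity mapper, as a placeholder
        job.setReducerClass(Reducer.class);             // identity reducer, as a placeholder
        job.setOutputKeyClass(LongWritable.class);      // types passed through by the identity classes
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input path
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output path (must not exist yet)

        // waitForCompletion() submits the job to the JobTracker and polls its progress.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The waitForCompletion(true) call is where the interaction described above happens: the job is handed to the JobTracker, which schedules its tasks onto TaskTrackers and reports progress back to the client.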

Hadoop-based SQL & Big Data Analytics Solution


QueryIO is a Hadoop-based SQL and Big Data Analytics solution, used to store, structure, analyze and visualize vast amounts of structured and unstructured Big Data. It is especially well suited to enable users to process unstructured Big Data, give it a structure and support querying and analysis of this Big Data using standard SQL syntax.

QueryIO enables you to leverage the vast and mature infrastructure built around SQL and relational databases and utilize it for your Big Data Analytics needs.

QueryIO builds on Apache Hadoop's scalability and reliability and enhances basic Hadoop by adding data integration services, cluster management and monitoring services, as well as big data querying and analysis services. It makes it easy to query and analyze big data across hundreds of commodity Compute+Store cluster nodes and petabytes of data in a logical manner.

How is Hadoop architected?



Mike Olson: Hadoop is designed to run on a large number of machines that don’t share any memory or disks. That means you can buy a whole bunch of commodity servers, slap them in a rack, and run the Hadoop software on each one. When you want to load all of your organization’s data into Hadoop, what the software does is bust that data into pieces that it then spreads across your different servers. There’s no one place where you go to talk to all of your data; Hadoop keeps track of where the data resides. And because there are multiple copy stores, data stored on a server that goes offline or dies can be automatically replicated from a known good copy.

In a centralized database system, you’ve got one big disk connected to four or eight or 16 big processors. But that is as much horsepower as you can bring to bear. In a Hadoop cluster, every one of those servers has two or four or eight CPUs. You can run your indexing job by sending your code to each of the dozens of servers in your cluster, and each server operates on its own little piece of the data. Results are then delivered back to you in a unified whole. That’s MapReduce: you map the operation out to all of those servers and then you reduce the results back into a single result set.

Architecturally, the reason you’re able to deal with lots of data is because Hadoop spreads it out. And the reason you’re able to ask complicated computational questions is because you’ve got all of these processors, working in parallel, harnessed together.

What are the JobTracker and TaskTracker in Hadoop?

There is one JobTracker (also a single point of failure) running on a master node and several TaskTrackers running on slave nodes. Each TaskTracker has multiple task instances running, and every TaskTracker reports to the JobTracker in the form of a heartbeat at regular intervals, which also carries a progress message for the current job it is executing, or indicates that it is idle if it has finished executing.
The JobTracker schedules jobs and takes care of failed ones by re-executing them on some other nodes. In MRv2, efforts have been made to provide high availability for the JobTracker, which would definitely change the way it has been.

What type of data can Hadoop handle?

Hadoop is able to handle all types of data: structured, unstructured, pictures, videos, telecom communication records, log files, etc.

Data Storage and Analysis in Hadoop


The problem is simple: while the storage capacities of hard drives have increased massively over the years, access speeds—the rate at which data can be read from drives—have not kept up. One typical drive from 1990 could store 1,370 MB of data and had a transfer speed of 4.4 MB/s, so you could read all the data from a full drive in around five minutes. Over 20 years later, one terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk.

This is a long time to read all data on a single drive—and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data. Working in parallel, we could read the data in under two minutes.

Only using one hundredth of a disk may seem wasteful. But we can store one hundred datasets, each of which is one terabyte, and provide shared access to them. We can imagine that the users of such a system would be happy to share access in return for shorter analysis times, and, statistically, that their analysis jobs would be likely to be spread over time, so they wouldn’t interfere with each other too much.

There’s more to being able to read and write data in parallel to or from multiple disks, though.

The first problem to solve is hardware failure: as soon as you start using many pieces of hardware, the chance that one will fail is fairly high. A common way of avoiding data loss is through replication: redundant copies of the data are kept by the system so that in the event of failure, there is another copy available. This is how RAID works, for instance, although Hadoop’s filesystem, the Hadoop Distributed Filesystem (HDFS), takes a slightly different approach, as you shall see later.

The second problem is that most analysis tasks need to be able to combine the data in some way; data read from one disk may need to be combined with the data from any of the other 99 disks. Various distributed systems allow data to be combined from multiple sources, but doing this correctly is notoriously challenging. MapReduce provides a programming model that abstracts the problem from disk reads and writes, transforming it into a computation over sets of keys and values. We will look at the details of this model in later chapters, but the important point for the present discussion is that there are two parts to the computation, the map and the reduce, and it’s the interface between the two where the “mixing” occurs. Like HDFS, MapReduce has built-in reliability.

This, in a nutshell, is what Hadoop provides: a reliable shared storage and analysis system. The storage is provided by HDFS and analysis by MapReduce. There are other parts to Hadoop, but these capabilities are its kernel.
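A quick back-of-the-envelope check of the read times quoted above, as a small Java sketch (the numbers are the round figures from the text, not measurements):

// Rough read-time arithmetic for one full 1 TB drive vs. 100 drives in parallel.
public class DriveReadTime {
    public static void main(String[] args) {
        double driveMB = 1_000_000;   // ~1 TB expressed in MB
        double speedMBps = 100;       // ~100 MB/s transfer speed

        double singleDriveSeconds = driveMB / speedMBps;
        System.out.printf("One drive:  %.1f hours%n", singleDriveSeconds / 3600);   // ~2.8 hours

        int drives = 100;             // the same data spread across 100 drives
        double parallelSeconds = (driveMB / drives) / speedMBps;
        System.out.printf("100 drives: %.1f minutes%n", parallelSeconds / 60);      // ~1.7 minutes
    }
}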

An Example Application: Word Count in Hadoop MapReduce



A simple MapReduce program can be written to determine how many times different words appear in a set of files. For example, if we had the files:
foo.txt: Sweet, this is the foo file
bar.txt: This is the bar file
We would expect the output to be:
sweet 1
this 2
is 2
the 2
foo 1
bar 1
file 2
Naturally, we can write a program in MapReduce to compute this output. The high-level structure would look like this:
mapper (filename, file-contents):
    for each word in file-contents:
        emit (word, 1)

reducer (word, values):
    sum = 0
    for each value in values:
        sum = sum + value
    emit (word, sum)
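A sketch of the same logic in Java against the org.apache.hadoop.mapreduce API; the class names are illustrative, and punctuation handling and the driver wiring are left out for brevity:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Emits (word, 1) for every whitespace-separated token, lowercased.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString().toLowerCase());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}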

The Hadoop Distributed File System



Introduction
HDFS, the Hadoop Distributed File System, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes), and provide high-throughput access to this information. Files are stored in a redundant fashion across multiple machines to ensure their durability to failure and high availability to very parallel applications. This module introduces the design of this distributed file system and instructions on how to operate it.
Goals for this Module:
- Understand the basic design of HDFS and how it relates to basic distributed file system concepts
- Learn how to set up and use HDFS from the command line
- Learn how to use HDFS in your applications
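For the last goal, here is a minimal sketch of writing and reading a file through Hadoop's FileSystem API; the path is a hypothetical example and the configuration is assumed to be picked up from the cluster's core-site.xml:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();            // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/hadoop/example.txt");    // hypothetical HDFS path

        // Write a small file (HDFS files are written once, by a single writer).
        FSDataOutputStream out = fs.create(file, true);      // true = overwrite if it exists
        out.writeBytes("hello hdfs\n");
        out.close();

        // Read it back.
        FSDataInputStream in = fs.open(file);
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        System.out.println(reader.readLine());
        reader.close();

        fs.close();
    }
}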

What is Hadoop - Overview


Overview of Hadoop : Hadoop MapReduce is a software framework for writing applications that process vast amounts of data (multi-petabyte datasets) in parallel on large clusters consisting of thousands of nodes of commodity hardware in a reliable, fault-tolerant manner.

A MapReduce job usually splits the input dataset into independent chunks which are processed by the map tasks in a parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a distributed filesystem.

Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (HDFS) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.

Architecture for Hadoop in the Enterprise


What can HDFS not do?


● Low-latency data access. HDFS is not optimized for low-latency data access; it trades latency to increase the throughput of the data.
● Lots of small files. Since the default block size is 64 MB, lots of small files mean lots of blocks, and the metadata for every file and block is held in the NameNode's memory, so many small files increase the memory requirements of the NameNode.
● Multiple writers and arbitrary modification. There is no support for multiple writers in HDFS; a file is written by a single writer, and writes are only appended at the end of the file.

List all the daemons required to run the Hadoop cluster

- NameNode
- DataNode
- JobTracker
- TaskTracker

Job Initialization in Hadoop


● Puts the job in an internal queue
● The job scheduler will pick it up and initialize it
● Creates an object to represent the job being run
  ○ Encapsulates its tasks
  ○ Bookkeeping info to track task status and progress
● Creates the list of tasks to run
● Retrieves the number of input splits computed by the JobClient from the shared filesystem
● Creates one map task for each split
● The scheduler creates the reduce tasks and assigns them to TaskTrackers
  ○ The number of reduce tasks is determined by mapred.reduce.tasks
● Task IDs are given for each task

How does the master-slave architecture work in Hadoop?


The MapReduce framework consists of a single master JobTracker and multiple slaves; each cluster node has one TaskTracker. The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.

What is MapReduce in Hadoop?


MapReduce is a concept, or programming model, for processing huge amounts of data in a faster way. As the name suggests, the processing is divided into two phases: Map and Reduce.

The MapReduce job usually splits the input data-set into independent chunks (big data sets into multiple small data sets).

Map task: processes these chunks in a completely parallel manner (one node can process one or more chunks).

The framework sorts the outputs of the maps.

Reduce task: the sorted map output becomes the input to the reduce tasks, which produce the final result.

Your business logic is written in the map task and the reduce task. Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks.

What is Hadoop framework?


Hadoop is an Apache framework developed completely in Java.
Hadoop analyzes and processes large amounts of data, i.e. petabytes of data, located in a distributed environment, in parallel and in less time.
In a Hadoop system, the data is distributed across thousands of nodes and processed in parallel.

What does co-group do in Pig?


Co-group joins the data sets by grouping each of them on the common field. It groups the elements by their common field and then returns a set of records containing two separate bags: the first bag consists of the records of the first data set with the common key, and the second bag consists of the records of the second data set with the common key.

What does FOREACH do?


FOREACH is used to apply transformations to the data and to generate new data items. The name itself indicates that the respective action will be performed for each element of the data bag.

Syntax : FOREACH bagname GENERATE expression1, expression2, …..
The meaning of this statement is that the expressions mentioned after GENERATE will be applied to the current record of the data bag.

What is a TaskTracker in Hadoop? How many instances of TaskTracker run on a Hadoop cluster?


A TaskTracker is a slave node daemon in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker. Only one TaskTracker process runs on any Hadoop slave node, and it runs in its own JVM process. Every TaskTracker is configured with a set of slots; these indicate the number of tasks that it can accept. The TaskTracker starts a separate JVM process to do the actual work (called a Task Instance); this is to ensure that a process failure does not take down the TaskTracker. The TaskTracker monitors these task instances, capturing the output and exit codes. When the task instances finish, successfully or not, the TaskTracker notifies the JobTracker. The TaskTrackers also send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.


What is Hadoop framework?


The Hadoop framework provides a facility to store very large amounts of data with almost no breakdown while querying. It breaks a file into pieces, copies each piece multiple times (3 by default) and stores the copies on different machines. Accessibility is ensured even if a machine breaks down or drops out of the network.
One can use MapReduce programs to access and manipulate the data. The developer need not worry about where the data is stored; he/she can reference the data from a single view provided by the master node, which stores the metadata of all the files stored across the cluster.

What is a DataNode in Hadoop Big Data?


The DataNode is where the actual data resides in the Hadoop HDFS system. The corresponding metadata, i.e. which block is on which node, is maintained on the NameNode.

How to Install Pig in Hadoop ?


To install Pig on Ubuntu and other Debian systems:

$ sudo apt-get install hadoop-pig


To install Pig on Red Hat-Compatible Systems:

$ sudo yum install hadoop-pig

To install Pig on Suse Systems:

$ sudo zypper install hadoop-pig



Once installed, Pig automatically uses the active Hadoop configuration.

After installing Pig, you can start the Grunt shell.

To start the Grunt shell:

$ export PIG_CONF_DIR=/usr/lib/pig/conf
$ pig
...
grunt>

What is Job Tracker in Hadoop ?


The JobTracker is the daemon (processing) service for submitting and tracking MapReduce jobs in Hadoop. The JobTracker is the single point of failure of the MapReduce service: if it goes down, all running jobs are halted. In Hadoop, the JobTracker performs the following actions:

a) Jobs are submitted to the JobTracker by client applications.
b) The JobTracker talks to the NameNode to determine the location of the data.
c) The JobTracker (JT) locates TaskTracker nodes with available slots at or near the data.
d) The JT submits the work to the chosen TaskTracker nodes.
e) The TaskTrackers are then monitored; if they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.

Why do we use Hadoop?



Hadoop handles any data type, in any quantity:


a) Structured, unstructured
b) Schema, no schema
c) High volume, low volume
d) All kinds of analytic applications

What is a cluster in Big Data Hadoop?




A cluster is a group of similar elements gathered together closely. In Hadoop, a cluster is the group of machines (nodes) that work together to store and process data.

What is Big Data in Hadoop?


Big Data is nothing but a huge amount of data.

Some of this huge data comes from social networking sites, banking data, medical data, log data, etc.