Q21. Which object can be used to get the progress of a particular Job?

Ans: Context

Q22. What is the next step after the Mapper or MapTask?

Ans: The output of the Mapper is sorted, and partitions are created for that output. The number of partitions equals the number of reducers.
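The default partitioning logic (Hadoop's HashPartitioner) can be sketched in plain Java. This standalone demo mirrors the hash-and-mod rule rather than using the Hadoop API itself:

```java
public class HashPartitionDemo {
    // Mirrors HashPartitioner: clear the sign bit, then take key.hashCode() mod reducer count.
    static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int reducers = 3;
        for (String key : new String[] {"apple", "banana", "cherry"}) {
            // Every occurrence of the same key lands in the same partition.
            System.out.println(key + " -> partition " + getPartition(key, reducers));
        }
    }
}
```

Because the partition is a pure function of the key, all records sharing a key are guaranteed to reach the same reducer.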

Q23. How can we control which key should go to a specific reducer?

Ans: Users can control which keys (and hence records) go to which Reducer by implementing a custom Partitioner.
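In a real job this logic would live in a subclass of org.apache.hadoop.mapreduce.Partitioner registered via Job.setPartitionerClass(...). The plain-Java sketch below uses a hypothetical routing rule, for illustration only, to show the shape of the getPartition contract:

```java
public class CustomPartitionDemo {
    // Hypothetical rule: keys starting with a-m go to reducer 0, all others to reducer 1.
    // In Hadoop, this body would be the getPartition() override of a custom Partitioner.
    static int getPartition(String key, int numReduceTasks) {
        char first = Character.toLowerCase(key.charAt(0));
        return (first <= 'm' ? 0 : 1) % numReduceTasks;
    }

    public static void main(String[] args) {
        System.out.println("apple  -> " + getPartition("apple", 2));   // 0
        System.out.println("orange -> " + getPartition("orange", 2));  // 1
    }
}
```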

Q24. What is the use of Combiner?

Ans: The Combiner is an optional class that can be specified via Job.setCombinerClass(Class), to perform local aggregation of the intermediate outputs, which helps cut down the amount of data transferred from the Mapper to the Reducer.
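Word count is the classic illustration: each map task sums its own (word, 1) pairs locally, so far fewer records cross the network during the shuffle. A self-contained sketch of that local aggregation (plain Java, not the Hadoop API):

```java
import java.util.*;

public class CombinerDemo {
    // Local aggregation: collapse the (word, 1) pairs emitted by one mapper into (word, count).
    static Map<String, Integer> combine(List<String> mapperOutputKeys) {
        Map<String, Integer> combined = new TreeMap<>();
        for (String word : mapperOutputKeys)
            combined.merge(word, 1, Integer::sum);
        return combined;
    }

    public static void main(String[] args) {
        List<String> emitted = Arrays.asList("to", "be", "or", "not", "to", "be");
        // Six records shrink to four before the shuffle.
        System.out.println(combine(emitted)); // {be=2, not=1, or=1, to=2}
    }
}
```

Note the combiner only helps when the reduce operation is associative and commutative (like summing); Hadoop may run it zero, one, or many times per map task.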

Q25. How many maps are there in a particular Job?

Ans: The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files. Generally it is around 10-100 maps per node. Task setup takes a while, so it is best if each map takes at least a minute to execute. For example, if you expect 10TB of input data and have a block size of 128MB, you'll end up with roughly 82,000 maps. To influence the number of maps you can use the mapreduce.job.maps parameter (which only provides a hint to the framework). Ultimately, the number of map tasks is controlled by the number of splits returned by the InputFormat.getSplits() method (which you can override).
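The arithmetic above can be checked directly: 10 TB of input divided into 128 MB blocks gives 81,920 splits, i.e. roughly 82,000 map tasks:

```java
public class SplitCountDemo {
    public static void main(String[] args) {
        long inputBytes = 10L * 1024 * 1024 * 1024 * 1024; // 10 TB
        long blockSize  = 128L * 1024 * 1024;              // 128 MB
        long mapTasks   = inputBytes / blockSize;          // one map per block, by default
        System.out.println(mapTasks); // prints 81920
    }
}
```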


Q26. What is the Reducer used for?

Ans: Reducer reduces a set of intermediate values which share a key to a (usually smaller) set of values. The number of reducers for the job is set by the user via Job.setNumReduceTasks(int).
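In the canonical word count, the reduce step sums all values that share a key. A plain-Java sketch of that per-key reduction (the Hadoop Reducer receives the same shape of input: a key plus an Iterable of its values):

```java
import java.util.*;

public class ReduceDemo {
    // One reduce() call per key: collapse all of the key's values into a single sum.
    static int reduce(String key, Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("to -> " + reduce("to", Arrays.asList(1, 1, 1))); // to -> 3
    }
}
```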

Q27. Explain the core methods of the Reducer?

Ans: The API of Reducer is very similar to that of Mapper: there's a run() method that receives a Context containing the job's configuration, as well as interfacing methods that return data from the reducer itself back to the framework. The run() method calls setup() once, reduce() once for each key associated with the reduce task, and cleanup() once at the end. Each of these methods can access the job's configuration data by using Context.getConfiguration().

As in Mapper, any or all of these methods can be overridden with custom implementations. If none of these methods are overridden, the default reducer operation is the identity function; values are passed through without further processing.

The heart of Reducer is its reduce() method. This is called once per key; the second argument is an Iterable which returns all the values associated with that key.
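The call order described above — setup() once, reduce() once per key, cleanup() once — can be sketched with a plain-Java stand-in for Reducer.run(). The method names match the Hadoop API; the rest is illustrative:

```java
import java.util.*;

public class ReducerLifecycleDemo {
    static List<String> calls = new ArrayList<>();

    static void setup()                                   { calls.add("setup"); }
    static void reduce(String key, List<Integer> values)  { calls.add("reduce(" + key + ")"); }
    static void cleanup()                                 { calls.add("cleanup"); }

    // Mirrors Reducer.run(): setup once, reduce once per key, cleanup once at the end.
    static void run(Map<String, List<Integer>> groupedInput) {
        setup();
        for (Map.Entry<String, List<Integer>> e : groupedInput.entrySet())
            reduce(e.getKey(), e.getValue());
        cleanup();
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> input = new TreeMap<>();
        input.put("a", Arrays.asList(1, 2));
        input.put("b", Arrays.asList(3));
        run(input);
        System.out.println(calls); // [setup, reduce(a), reduce(b), cleanup]
    }
}
```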

Q28. What are the primary phases of the Reducer?

Ans: Shuffle, Sort and Reduce

Q29. Explain the shuffle?

Ans: Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.

Q30. Explain the Reducer’s Sort phase?

Ans: The framework groups Reducer inputs by key (since different mappers may have output the same key) in this stage. The shuffle and sort phases occur simultaneously; while map outputs are being fetched they are merged (similar to merge-sort).
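The key point is that each map output arrives already sorted, so the reducer side only needs merge-sort's merge step to make equal keys from different mappers adjacent, ready for grouping. A plain-Java sketch of merging two sorted map outputs (illustrative, not the Hadoop implementation):

```java
import java.util.*;

public class ShuffleMergeDemo {
    // Merge two already-sorted key streams (merge-sort's merge step), so that
    // equal keys from different mappers end up adjacent in the output.
    static List<String> merge(List<String> a, List<String> b) {
        List<String> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.size() && j < b.size())
            out.add(a.get(i).compareTo(b.get(j)) <= 0 ? a.get(i++) : b.get(j++));
        while (i < a.size()) out.add(a.get(i++));
        while (j < b.size()) out.add(b.get(j++));
        return out;
    }

    public static void main(String[] args) {
        List<String> mapper1 = Arrays.asList("apple", "cherry", "cherry");
        List<String> mapper2 = Arrays.asList("apple", "banana");
        System.out.println(merge(mapper1, mapper2));
        // [apple, apple, banana, cherry, cherry]
    }
}
```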

