This article provides an introduction to HDFS along with a guide on how to create an HDFS data output using Upsolver.
What is Apache Hadoop?
Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation.
It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
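The MapReduce model can be illustrated with a toy word count. This is only a sketch of the programming model in plain Python, not Hadoop's actual Java API: a map phase emits (key, value) pairs, the framework groups values by key, and a reduce phase combines each group.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    # Shuffle: group values by key, as the framework would between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    # Reduce: combine each group into a single result per key.
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(map_phase(["to be or not to be"]))
# counts maps each word to its number of occurrences.
```

In Hadoop, the same map and reduce functions run in parallel across the cluster, with HDFS providing the distributed storage underneath.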
What is HDFS?
Hadoop Distributed File System (HDFS) is a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
Because HDFS provides higher data throughput than traditional file systems, along with high fault tolerance and native support for large datasets, it is used to scale a single Apache Hadoop cluster to hundreds (and even thousands) of nodes.
Create an HDFS data output
1. Go to the Outputs page and click New.
2. Select HDFS as your output type.
3. Name your output and select whether the output should be Tabular or Hierarchical.
After adding your Data Sources, click Next.
9. On the Filters tab, add a filter to the data source; this works like a WHERE clause in SQL.
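The filter behaves like a SQL WHERE clause: only rows matching the condition pass through. A minimal Python sketch of that logic, using hypothetical field names and values that are not part of Upsolver:

```python
# Hypothetical events as they might arrive from a data source.
events = [
    {"event_type": "click", "country": "US"},
    {"event_type": "view", "country": "DE"},
    {"event_type": "click", "country": "FR"},
]

# Equivalent to a filter like `WHERE event_type = 'click'`:
# keep only the rows that satisfy the condition.
filtered = [e for e in events if e["event_type"] == "click"]
```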
10. Click Make Aggregated to turn the output into an aggregated output.
Read the warning before clicking OK, then add the required aggregation.
This aggregation field will then be added to the Schema tab.
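An aggregated output groups rows by a key and computes summary values, much like GROUP BY with an aggregation function in SQL. A minimal Python sketch of the idea, with hypothetical field names:

```python
from collections import defaultdict

# Hypothetical input rows; in Upsolver these come from the data source.
rows = [
    {"country": "US", "revenue": 10.0},
    {"country": "US", "revenue": 5.0},
    {"country": "DE", "revenue": 7.5},
]

# Equivalent to `SELECT country, SUM(revenue) ... GROUP BY country`:
# one summed value per distinct key.
totals = defaultdict(float)
for row in rows:
    totals[row["country"]] += row["revenue"]
```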
11. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations.
See: Functions, Aggregation Functions
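A calculated field on aggregations derives a new value from fields that have already been aggregated, for example a ratio of two sums. A sketch in Python, with illustrative names that are not Upsolver's:

```python
# Suppose the aggregation step produced these per-key sums
# (hypothetical field names for illustration).
aggregated = {"total_revenue": 250.0, "total_orders": 50}

# A calculated field defined on top of the aggregations,
# e.g. average revenue per order.
aggregated["avg_order_value"] = (
    aggregated["total_revenue"] / aggregated["total_orders"]
)
```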
Click Preview at any time to view a preview of your current output.