
The Components of TM1

Introduction

Use the Customer Service case timeline with other applications

You can use the Customer Service case timeline with other ServiceNow applications by creating a configuration for each application and adding the ResolutionShaper field to the desired form.
Procedure:
1. Ensure that the Customer Service plugin (com.sn_customerservice) has been activated.
2. Navigate to the Resolution Shaper Configs page (<instance>sys_resolutionshaper_config_list.do) and click New.
3. Select a table in the Task Table field.
4. Add the desired states in the Requestor States field using a comma-separated list, for example: New, Active, Resolved, Closed.
5. Make any necessary changes to the remaining fields and click Submit.
6. Navigate to the desired form.
7. Right-click the form header...

The IBM Cognos TM1 programming environment

The API is intended to give access to many of the features and functionality of the TM1 OLAP engine. It is designed for use with Microsoft C, C++, and Microsoft Visual Basic. All of the C functions are supported by Microsoft Visual Basic; however, some of the string- and array-value-handling functions have special versions for Microsoft Visual Basic, and you should use those versions if you are programming in Microsoft Visual Basic. For C and C++ applications, include the header file TM1API.H. For Microsoft Visual Basic applications, include the file TM1API.BAS. The API has been optimized for use in a networked environment. As a result, the conventions used vary markedly from those used in other Application Programming Interfaces you may...

Using multiple Layouts and Views to design a GUI

As we studied different Views, ViewGroups, and Layouts in the previous tutorials, it's now time to learn how to use all of them together in our Android project to design great user interfaces. In this tutorial, we will learn how we can put different layouts, views, and view groups inside another layout (a hierarchical arrangement) to design the perfect GUI for your Android application.

Use Cognos 10 Business Insight-Preview and highlights

Cognos Business Insight is the new tool from IBM in the Cognos business intelligence family. It is more a workspace than a tool, giving business users access to all types of data. It has a dashboard-like look that lets business users assemble different types of data, and they can view any type of information in Business Insight, whether charts, graphs, or other performance metrics.

Explain the process of distributed data using Spark

Distributed data processing refers to spreading computation across a network of interconnected computer systems at different locations that share data.
Apache Spark is an open-source, general-purpose distributed data processing engine with the capacity to handle large volumes of data.
Moreover, it supports different cluster managers, such as Standalone, Kubernetes, Apache Mesos, and Apache Hadoop YARN (Yet Another Resource Negotiator).
It includes an extensive set of libraries and APIs and supports different programming languages like Java, Scala, Python, and R. Moreover, its flexibility makes it suitable for a wide range of use cases.
Apache Spark also works with distributed data stores like MapR XD and Hadoop's HDFS, with popular NoSQL databases like MapR Database, Apache HBase, and MongoDB, and with distributed messaging stores like MapR Event Store and Apache Kafka.
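As an illustration of this distributed model, here is a minimal PySpark sketch (assuming a local Spark installation; the input path is a placeholder) of a word count whose work is split across whatever cluster manager the session is configured with:
```python
from pyspark.sql import SparkSession

# Build a Spark session; the master URL decides the cluster manager
# ("local[*]" here, but it could equally be a YARN, Mesos, or Kubernetes endpoint).
spark = (SparkSession.builder
         .appName("DistributedWordCount")
         .master("local[*]")
         .getOrCreate())

# Read a text file into an RDD; the path is a placeholder.
lines = spark.sparkContext.textFile("hdfs:///data/sample.txt")

# Classic distributed word count: the work runs in parallel across partitions.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Collect a small sample of results back to the driver.
for word, count in counts.take(10):
    print(word, count)

spark.stop()
```
The same code runs unchanged whether the master points at a single machine or a full cluster, which is the main appeal of Spark's processing model.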
To learn the big data course, visit OnlineITGuru's big data and hadoop online training blog.

Concepts of Hadoop and installing it in Hadoop cluster environment

Apache Hadoop is one of the open-source software frameworks most commonly used to make sense of Big Data.
Every company needs to make sense of its data on an ongoing basis in today's digitally powered world.
Hadoop is a whole ecosystem of Big Data resources and technologies, commonly used to store and process big data.
To learn more tutorials, visit OnlineITGuru's big data and hadoop course blog.
The architecture can be split into two parts, i.e. the core components of Hadoop and the
complementary or other components.
Architecture of Hadoop
There are four main or basic components.
● Hadoop Common:
This is a collection of common utilities and libraries that support the other Hadoop modules. It ensures that the Hadoop cluster automatically handles hardware failures.
● HDFS:
This is the Hadoop Distributed File System, which stores and distributes data over the Hadoop cluster in the form of small blocks (a short usage sketch follows this list). To ensure data consistency, each block is replicated several times.
● Hadoop YARN:
It allocates resources, which in turn allows different users to execute various applications without worrying about increased workloads.
● Hadoop MapReduce:
By spreading the data across small blocks, it performs tasks in parallel.
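As referenced in the HDFS bullet above, here is a minimal sketch (Python, assuming the standard hdfs command-line client is on the PATH and a cluster is running; the paths are placeholders) of copying a local file into HDFS and listing the target directory:
```python
import subprocess

# Copy a local file into HDFS; "hdfs dfs -put" is the standard shell command.
subprocess.run(
    ["hdfs", "dfs", "-put", "local_data.csv", "/user/demo/local_data.csv"],
    check=True,
)

# List the directory to confirm the file landed in the distributed file system.
result = subprocess.run(
    ["hdfs", "dfs", "-ls", "/user/demo"],
    check=True,
    capture_output=True,
    text=True,
)
print(result.stdout)
```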
Additional or Other Hadoop Elements
Ambari:
Ambari is a web-based platform for the management, configuration and testing of Big Data Hadoop
clusters to support components like HDFS, MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig
and Sqoop. It offers a console for monitoring Hadoop cluster health and allows user-friendly assessment of the performance of certain components, including MapReduce, Pig, Hive, etc.
Cassandra:
This is an open-source, highly scalable, distributed NoSQL database system dedicated to managing large quantities of data across numerous commodity servers, providing high availability without a single point of failure.
Flume:
Flume is a distributed and reliable tool used to collect, consolidate, and efficiently transfer bulk streaming data into HDFS.
HBase:
HBase is a distributed, non-relational database running on top of the Hadoop cluster, which
stores vast volumes of structured data. It serves as an input for jobs in MapReduce.
HCatalog:
It's a table and storage management layer that allows developers to access and exchange data.
Hive:
Hive is a data warehouse platform that allows data to be summarized, queried, and analyzed using a SQL-like query language.
Oozie:
Oozie is a server-based program that handles the Hadoop jobs and schedules them.
Pig:
A dedicated high-level tool, Pig is in charge of manipulating data stored in HDFS with the aid of a MapReduce compiler and a language named Pig Latin. It helps analysts collect, transform, and load (ETL) the data without writing MapReduce code.
Solr:
A highly scalable search platform, Solr provides indexing, centralized configuration, and failover and recovery.
Spark:
A fast, open-source engine that supports SQL, streaming, machine learning, and graph processing, and integrates with Hadoop.
Sqoop:
It's a tool for moving massive quantities of data between Hadoop and structured databases.
ZooKeeper:
ZooKeeper is an open-source program that configures and synchronizes distributed systems.
Install Hadoop in a Hadoop cluster environment
In this segment, you can learn about downloading Hadoop. You first need to download Hadoop, an open-source tool, in order to work in the Hadoop environment. Hadoop can be installed free of charge on any system, as the software is available as an open-source resource. There are, however, some system requirements that need to be met for an effective installation of Hadoop, such as...
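One prerequisite that every Hadoop installation shares is a working Java runtime. A minimal sketch (Python, assuming java is available on the PATH) for checking that before setting up Hadoop:
```python
import subprocess

def java_available() -> bool:
    """Return True if a Java runtime responds to `java -version`."""
    try:
        # By convention, `java -version` prints its report to stderr.
        result = subprocess.run(
            ["java", "-version"],
            capture_output=True,
            text=True,
        )
        print(result.stderr.strip())
        return result.returncode == 0
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    if not java_available():
        print("Java not found - install a JDK before setting up Hadoop.")
```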

Running your First Android App on Emulator or Device

Edit configuration while Running Android Application

Now we are going to run our first Android application on one of the created emulators. So if no AVD has been started yet, go to the AVD Manager and start an AVD (virtual device).

Important concepts Of Android

Let's start with the most basic Android concepts, which some of us may already know.

Understand: How the components of the Hadoop ecosystem fit in with the data processing lifecycle?

Hadoop is a Big Data framework that helps in the processing of huge data sets. It consists of various modules supported by a large ecosystem of technical elements. In this context, the Hadoop Ecosystem is a powerful platform, or suite, that provides resolutions to various Big Data issues. Several components of the Hadoop Ecosystem have been deployed by various organizations for various services. Moreover, these components of the Hadoop Ecosystem are developed to deliver an explicit function.
In this article, we will get to know the different components of the Hadoop ecosystem and their usefulness in the data processing lifecycle.
For more information, visit our ITGuru's big data hadoop course blog.
Components of the Hadoop ecosystem
There are four major components of Hadoop: HDFS, YARN, MapReduce, and Common utilities. But some other components collectively form a Hadoop ecosystem that serves different purposes. These are:
● HDFS
● YARN
● Spark
● MapReduce
● Hive
● HBase
● Pig
● Mahout, Spark MLlib
● Zookeeper
● Oozie
● Flume
● Sqoop
● Solr
● Ambari
Let's discuss the above-mentioned Hadoop ecosystem components in detail.
HDFS
HDFS or Hadoop Distributed File System is the major component of the Hadoop
ecosystem. It is responsible for storing large data sets inclusive of structured or
unstructured data. Moreover, it stores them across different nodes and also
manages the metadata in the form of log files.
The core components of HDFS are as follows:
● Name Node
● Data Node
The NameNode is the primary node; it holds the metadata of all the blocks within the cluster and manages the DataNodes, which store the actual data. The DataNodes are commodity hardware in the distributed ecosystem and run on the slave machines. Moreover, this makes the Hadoop ecosystem cost-effective.
HDFS works at the heart of the system by maintaining all the coordination among
the clusters and hardware. It helps in the data processing lifecycle as well.
MapReduce
It is one of the core data processing components of the Hadoop ecosystem.
MapReduce is a software framework that helps in writing applications by making
the use of distributed and parallel algorithms to process huge datasets within the
Hadoop ecosystem. Moreover, it transforms big data sets into an easily
manageable file. MapReduce also takes care of system failures by recovering data from another node in the event of a breakdown.
There are two important functions in MapReduce, namely Map() and Reduce().
Map() – this function performs actions like sorting, grouping, and filtering of data, organizing it into groups. It takes in key-value pairs and generates its results as key-value pairs.
Reduce() – this function aggregates the mapped data. It takes the results generated by Map() as input and combines those tuples into smaller sets of tuples.
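A minimal pure-Python sketch of these two roles (illustrative only; in a real job, Hadoop runs the mapper and reducer in parallel across the cluster, for example via Hadoop Streaming):
```python
from collections import defaultdict

def map_phase(lines):
    """Map(): emit (word, 1) key-value pairs for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce(): aggregate the mapped pairs into (word, total_count) tuples."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

if __name__ == "__main__":
    sample = ["big data needs big clusters", "hadoop processes big data"]
    print(reduce_phase(map_phase(sample)))
    # {'big': 3, 'data': 2, 'needs': 1, 'clusters': 1, 'hadoop': 1, 'processes': 1}
```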
YARN
YARN, or Yet Another Resource Negotiator, is considered the brain of the Hadoop ecosystem. It helps manage resources across clusters and performs processing jobs like scheduling and resource allocation (a small submission sketch follows the component list below). YARN has two major kinds of components: Resource and Node Managers.
● Resource Manager: This is the major node in the data processing department. It receives processing requests and distributes resources to the applications within the system, and it schedules MapReduce jobs.
● Node Manager: These are installed on each DataNode and handle the allocation of resources such as CPU, memory, and bandwidth per system, and they monitor resource usage and activity.
● Application Manager: It acts as an interface between the Resource Manager and Node Managers and communicates between them as required. It is a component of the Resource Manager; another component of the Resource Manager is the Scheduler.
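For illustration, this is how an application is typically handed to YARN for resource allocation; the sketch (Python, assuming Spark is installed and HADOOP_CONF_DIR points at the cluster configuration; the job script name is a placeholder) simply wraps the standard spark-submit command:
```python
import subprocess

# Ask YARN (via spark-submit) to allocate containers for the application.
# The Resource Manager schedules it; Node Managers run the executors.
subprocess.run(
    [
        "spark-submit",
        "--master", "yarn",          # hand resource management to YARN
        "--deploy-mode", "cluster",  # the driver runs inside a YARN container
        "--num-executors", "4",
        "--executor-memory", "2g",
        "my_spark_job.py",           # placeholder application script
    ],
    check=True,
)
```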
Spark
Spark is a platform that unifies all kinds of Big Data processing, such as batch processing, interactive or real-time processing, and visualization. It includes several built-in libraries for streaming, SQL, machine learning, and graph processing purposes. Moreover, Spark provides lightning-fast performance for batch and stream processing, and it handles resource-intensive tasks like the above. Apache Spark also uses in-memory resources, which makes it faster in terms of optimization.
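A brief sketch of that in-memory behaviour (PySpark; the dataset path and the event_type column are placeholders): caching a DataFrame keeps it in executor memory so repeated queries avoid re-reading from disk.
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("InMemoryDemo").getOrCreate()

# Placeholder path; any CSV or columnar source would do.
df = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# Keep the DataFrame in memory across actions.
df.cache()

# Both actions below reuse the cached partitions instead of re-reading the file.
print(df.count())
df.groupBy("event_type").count().show()

spark.stop()
```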
HIVE
Hive is based on SQL methodology and interfaces, and its query language is known as HQL. Hive supports all types of SQL data, which makes query processing simpler and easier. Moreover, Hive comes with two basic components: the JDBC drivers and the Hive command line. It is highly scalable and allows both real-time and batch processing facilities. Furthermore, Hive executes various queries by using MapReduce; hence, a user doesn't need to write any code in low-level MapReduce.
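As a small illustration of this SQL-like interface, here is a sketch using PySpark's Hive support (an assumption: Spark is built with Hive support and a hypothetical sales table exists in the metastore):
```python
from pyspark.sql import SparkSession

# enableHiveSupport() lets Spark read tables registered in the Hive metastore.
spark = (SparkSession.builder
         .appName("HiveQueryDemo")
         .enableHiveSupport()
         .getOrCreate())

# A typical HQL-style query; the table and column names are placeholders.
result = spark.sql("""
    SELECT region, SUM(amount) AS total_sales
    FROM sales
    GROUP BY region
    ORDER BY total_sales DESC
""")

result.show()
spark.stop()
```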
PIG
Pig works on the Pig Latin language, a query processing language similar to SQL. It structures the data flow and processes and analyzes large data sets stored in HDFS. Pig executes commands and takes care of all the underlying MapReduce activities. After processing ends, Pig stores the output in HDFS. Pig includes specially designed components such as Pig Runtime and Pig Latin.
Mahout:
Mahout provides a platform that brings machine learning capability to a system or application. Machine learning helps the system develop itself based on past data or patterns, user interaction, or algorithms. Moreover, it provides different types of libraries that implement core machine learning concepts: collaborative filtering, clustering, and classification.
HBase:
It's a NoSQL database built on top of the HDFS system. It supports all kinds of data and provides capabilities similar to Google's Big Table, so it can work on Big Data sets very effectively. Moreover, HBase is an open-source and distributed database that provides real-time read/write access to big data sets efficiently.
There are two major components of HBase:
● HBase Master
● Region Server
Zookeeper
There was a huge problem of managing coordination and synchronization among
the different components of Hadoop that resulted in inconsistency. Zookeeper
overcomes all these problems by performing synchronization, inter-component
communication, grouping, and so on.
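As an illustration of that coordination role, here is a tiny sketch using the kazoo Python client (an assumption; the host address and znode path are placeholders) to store and read a shared piece of configuration through ZooKeeper:
```python
from kazoo.client import KazooClient

# Connect to a ZooKeeper ensemble (placeholder address).
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Create a znode holding a shared value, if it doesn't exist yet.
zk.ensure_path("/demo")
if not zk.exists("/demo/active_master"):
    zk.create("/demo/active_master", b"node-01")

# Any component in the cluster can now read the same coordinated value.
data, _stat = zk.get("/demo/active_master")
print(data.decode())

zk.stop()
```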
Ambari
The Ambari component is responsible for managing, monitoring, and securing the Hadoop cluster effectively.
Hue
Hue stands for Hadoop User Experience. It's an open-source web interface for Hadoop, and it performs the following operations:
● Upload and browse data.
● Query tables in Hive and Impala.
● Moreover, Hue makes Hadoop easier to use.
Sqoop
Sqoop is one of the components of Hadoop that imports data from external sources into Hadoop ecosystem components such as HDFS, Hive, HBase, and many more. It also helps transfer data from Hadoop to other external sources, and it works with RDBMSs like Teradata, Oracle, MySQL, etc.
Flume
Flume is a distributed, reliable, and available service for efficiently collecting and moving huge amounts of streaming data from different web servers into HDFS. Moreover, it has three components: source, channel, and sink.
Oozie:
It simply performs the task of a scheduler: it schedules various jobs and binds them together as a single unit.
Big Data processing lifecycle
The Big Data processing lifecycle includes four different stages: Ingest, Processing, Analyze, and Access. Each stage has a different strategy, and each makes use of components of the Hadoop ecosystem. Let us elaborate on them in detail.
Ingest
This is the first stage of Big Data processing. Here, the data is ingested or
transferred to Hadoop from different sources like relational databases, systems,
or local storage files. Moreover, in this stage the component Sqoop transfers data
from RDBMS to HDFS and Flume transfers event data.
Processing
Processing is the second stage of this lifecycle, where the data is stored and processed. The data is stored in HDFS and in NoSQL distributed data stores such as HBase. Spark and MapReduce perform the data processing jobs at this stage.
Analyze
Analyzing is the third stage, where the data is analyzed using processing frameworks like Pig, Hive, and Impala.
Here, the Pig component converts the data by using Map and Reduce and then analyzes it. Moreover, Hive is also based on Map and Reduce programming and is most suitable for structured data.
Access
The fourth and final stage in this lifecycle is Access, which is performed by tools such as Hue and Cloudera Search. In the Access stage, the analyzed data can be accessed by users and clients.
Conclusion
Thus, we reach a conclusion in this article, where we learned how the components of the Hadoop ecosystem fit in with the data processing lifecycle.
Learn more from big data training.