# Beginner's guide to ARIMA: Problem Identification & Data Gathering – Part 2

Continuing from my earlier post, this one explores ARIMA through an example. We will go through each step in detail and see how to accomplish ARIMA-based forecasting for a problem.

Step 1: Problem Identification or Our Scenario

We are going to take the historical time series of household power consumption and use that data to forecast with ARIMA. A research paper published in the Proceedings of the International MultiConference of Engineers and Computer Scientists 2013, Vol. I, analyses the performance of ARMA versus ARIMA on the same dataset. This post focuses on accomplishing the forecast step by step in R on that dataset, for ease of use for beginners. Sooner or later we will also evaluate tools such as AutoBox and R that can be used for solving these problems.

Step 2: Data Gathering or Identification of dataset

The dataset we are going to use is the Individual household electric power consumption dataset, available in the UCI Repository at the URL: https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption. It is a multivariate dataset. Please check this link to understand the difference between univariate, bivariate and multivariate data.

Quick Summary of the Dataset:

• The dataset contains data between December 2006 and November 2010
• It is around 19.7 MB of data
• The file is in .txt format
• Columns in the dataset are:
  • Date (DD/MM/YYYY)
  • Time (HH:MM:SS)
  • Global active power
  • Global reactive power
  • Voltage
  • Global intensity
  • Sub metering 1
  • Sub metering 2
  • Sub metering 3

You can open this semicolon-delimited text file in Excel; after completing the necessary steps in the import wizard, you will have a spreadsheet with the data as given below. I was able to load rows only up to 1,048,576 (Excel's row limit), while the text file actually contains 2,075,260 rows. Whoa..

Next, for Step 3 (preliminary analysis) we can use R. For that we need to load this data into R. We can save the Excel sheet in CSV or XLS format and then import it into R as outlined in my other post or using this link. I'm using RStudio for the purpose, and the data-loading process is demonstrated in the screenshots in the subsequent sections.

The installation of gdata showed an error at first, but a subsequent attempt succeeded. Now we can load the library using the command library(gdata). After that we load the powerData variable with the data from the CSV file for further analysis, and we can view the data using View. Please check the console window for the code.
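As an alternative to the Excel/gdata route, base R can read the semicolon-delimited file directly. The sketch below writes a tiny two-row sample file first so it is self-contained; in practice you would point read.table at the full UCI file (distributed as household_power_consumption.txt) instead.

```r
# Minimal base-R sketch (no gdata needed). A two-row sample file is
# written first so the snippet runs on its own; in practice, point
# read.table at the full UCI file instead.
sampleLines <- c(
  "Date;Time;Global_active_power;Voltage",
  "16/12/2006;17:24:00;4.216;234.840",
  "16/12/2006;17:25:00;?;233.630"  # "?" marks a missing reading
)
writeLines(sampleLines, "power_sample.txt")

powerData <- read.table(
  "power_sample.txt",
  header = TRUE,
  sep = ";",         # the file is semicolon delimited
  na.strings = "?",  # missing readings become NA
  stringsAsFactors = FALSE
)
str(powerData)  # inspect column names and types
```

Reading the raw file this way also avoids Excel's 1,048,576-row limit mentioned above.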

In the next post we will do some preliminary analysis on this data which we have loaded.

# Beginner's guide to ARIMA: the ARIMA forecasting technique, learn by example

The word “ARIMA” means lion in the Tamil language.

Everybody is curious to know what the future holds; it's always exciting to know about it. Though there are various forecasting models available, in this post we will look at ARIMA. Welcome to the world of forecasting with ARIMA.

## What is ARIMA?

ARIMA is a forecasting technique. ARIMA (Auto Regressive Integrated Moving Average) is a key tool in time series analysis. This link from Penn State University gives a good introduction to time series fundamentals.
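To make this concrete, here is a minimal sketch using base R's arima() function on the built-in AirPassengers series (monthly airline passenger counts, 1949–1960). The (1, 1, 1) order is only an illustration, not a tuned model; choosing the order properly is part of the modelling process covered later in this series.

```r
# Fit a simple ARIMA(1,1,1) to the built-in AirPassengers series and
# forecast twelve months ahead. The (p, d, q) order here is an
# arbitrary example, not a carefully selected model.
fit <- arima(AirPassengers, order = c(1, 1, 1))
fc  <- predict(fit, n.ahead = 12)  # point forecasts and standard errors
print(fc$pred)                     # the next twelve monthly forecasts
```

arima() and predict() ship with base R's stats package, so no extra installation is needed.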

## What is the purpose?

To forecast. The book Forecasting: Principles and Practice gives a very good understanding of the whole subject. You can read it online.

## What kind of business problems can it solve?

The following are some of the use cases of ARIMA:

• Forecast revenue
• Forecast whether to buy a new asset or not
• Forecast of currency exchange rates
• Forecast consumption of energy or utilities

## What is mandatory to get started?

1. It is very important to have clarity on what to forecast. For example, if you want to forecast revenue, whether it is for a product line, a demography, etc. has to be analysed before venturing into the actual task.
2. The period or horizon over which the forecast is to be made is also crucial. Examples: monthly, quarterly, half-yearly, etc.

## What are the preferred prerequisites on data for time series forecasting?

### Updated after comment from tomdireill:

1. The data should be a time series, that is, data observed sequentially over time.
2. It can be seasonal, meaning it has recurring highs and lows. As per the notes from Duke University, it can also be applied to flat, pattern-less data.
3. It may have an increasing or decreasing trend.
4. Outliers can be handled as outlined here: http://www.unc.edu/~jbhill/tsay.pdf
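One quick way to check the trend and seasonality points above is a classical decomposition. The sketch below uses base R's decompose() on the built-in AirPassengers series, which shows both a clear trend and a seasonal pattern.

```r
# Split a monthly series into trend, seasonal and random components.
# A visible seasonal panel and a sloping trend panel suggest the
# series meets the prerequisites discussed above.
dec <- decompose(AirPassengers)
plot(dec)            # four panels: observed, trend, seasonal, random
summary(dec$trend)   # the trend component (NA at the series edges)
```

If the seasonal component is essentially flat and the trend is constant, the series may be the pattern-less kind mentioned in point 2.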

Ok, now that we understand what is essential to get started on forecasting, let's work through the steps.

5 Steps towards forecasting:

In the next post we will take up an example and work through the above steps one by one. Stay tuned.

# Database synchronization needs of multi-location enterprises

Recently, during an interaction with one of our colleagues, a discussion came up about replicating the same database, or making it available, across multiple locations. With the various connectivity options that exist these days, and with people talking about cloud-based apps and implementations, why is this still needed? This post is a search for an answer to that.

Business needs for multi-location enterprise solutions:

1. Requires using one application across the enterprise to ensure data integrity and a single version of truth.
2. Get to know what happens in other locations, manufacturing units or outlets.
3. Helps to plan and react better based on the data insights available from other locations.
4. Process control and improvement across the enterprise with a single solution
5. Low training cost

Challenges in accomplishing these business needs:

1. Lack of connectivity or poor connectivity between the locations
2. Higher bandwidth costs or complex internet solutions required to support the enterprise needs
3. No control or process enablement in the locations or facilities
4. Enterprise applications do not support multi-location scenarios with good control over data and process
5. Processes and applications established at locations without understanding the impact of connectivity and process issues
6. Limited accountability and responsibility at the locations in comparison with corporate or head quarters

Solutions or options available to us:

1. If we are very sure about connectivity and availability, we can adopt a cloud-based solution, which resolves the problem once and for all
2. When there are connectivity issues, we may need to resort to database synchronization options, which are more feasible for managing enterprise applications
3. The key to these kinds of scenarios is to identify the following with respect to data:
1. Who is the data owner?
2. Who has to create it?
3. Where does it have to be created?
4. Who is going to consume it?
5. Is it required in real time?
6. What controls are to be established on the data?

Related articles for more reading:

http://blogs.msdn.com/b/sync/archive/2009/12/14/how-to-synchronize-multiple-geographically-distributed-sql-server-databases-using-sql-azure-data-sync.aspx

http://www.comp.dit.ie/btierney/Oracle11gDoc/server.111/b28324/tdpii_repcont.htm