
FutureLearn Week 2: Post 3 of 4

Two of the biggest challenges of big data are analysing and visualising the data.
Starting with analysis, big data files can be substantial, so several things must be considered before downloading the data: the file size, how long the file will take to download, whether all of it is necessary or only part will suffice, and whether there is enough storage space on the system itself. Visualisation is a way of representing the data so that it is easier to understand, for example through word clouds and similar techniques. This helps users see the prominent and key terms from the analysis of the data sets.
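As a rough illustration of those pre-download checks, the Python sketch below asks the server for the file size before fetching anything and compares it against the free space on disk. The URL is a placeholder and the use of the requests library is an assumption for the sake of the example, not part of the course material.

```python
import shutil

import requests  # third-party HTTP library, assumed available

# Hypothetical open-data file used purely for illustration.
DATA_URL = "https://example.org/energy/consumption.csv"

# Ask for the headers only, without downloading the body.
response = requests.head(DATA_URL, allow_redirects=True, timeout=30)
size_bytes = int(response.headers.get("Content-Length", 0))

# How much free space does the local system actually have?
free_bytes = shutil.disk_usage(".").free

print(f"File size: {size_bytes / 1_000_000:.1f} MB")
print(f"Free space: {free_bytes / 1_000_000:.1f} MB")

if size_bytes and size_bytes < free_bytes:
    print("Enough storage available, safe to download.")
else:
    print("File may be too large, or size unknown; consider downloading only part of it.")
```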

The first step after downloading the data would be to quality check it, to ensure that each field holds the appropriate data type and that the user understands the meaning of each field; a minimal check of this kind is sketched after this paragraph.
Keeping a copy of the original data would be essential, as would documenting each version change at every stage of the visualisation. This version documentation allows users to create working copies from the original, which serves as the master copy.
Databases, programs or data warehousing platforms can be used as alternatives to spreadsheets for analysing data sets, as shown in the second sketch below.
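As a minimal sketch of that quality check, the snippet below loads a working copy with pandas and reports each column's inferred data type along with any values that fail to parse as numbers. The file name and the "kwh" column name are placeholders, and the use of pandas is an assumption for the example.

```python
import pandas as pd

# Placeholder working copy of the downloaded data set.
df = pd.read_csv("energy_consumption_working_copy.csv")

# Check that each field has been read with a sensible data type.
print(df.dtypes)

# Example: the consumption column (hypothetical name) should be numeric.
# Coercing non-numeric entries to NaN makes the bad values easy to count.
consumption = pd.to_numeric(df["kwh"], errors="coerce")
print(f"{consumption.isna().sum()} rows could not be read as numbers")
```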
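The second sketch shows one way the last two points might look in practice: the original file is kept untouched, a checksum and a note are appended to a simple version log, and a working copy is loaded into a SQLite database as an alternative to a spreadsheet. The file and table names are made up for the example, and pandas is again assumed.

```python
import hashlib
import shutil
import sqlite3
from datetime import date

import pandas as pd

ORIGINAL = "energy_consumption_original.csv"    # never edited
WORKING = "energy_consumption_working_copy.csv"

# Create a working copy so the original stays intact as the master.
shutil.copyfile(ORIGINAL, WORKING)

# Record a checksum of the original so later versions can be traced back to it.
checksum = hashlib.sha256(open(ORIGINAL, "rb").read()).hexdigest()
with open("versions.log", "a", encoding="utf-8") as log:
    log.write(f"{date.today()} copied {ORIGINAL} ({checksum[:12]}) to {WORKING}\n")

# Load the working copy into SQLite instead of analysing it in a spreadsheet.
conn = sqlite3.connect("energy.db")
pd.read_csv(WORKING).to_sql("consumption", conn, if_exists="replace", index=False)
print(conn.execute("SELECT COUNT(*) FROM consumption").fetchone()[0], "rows loaded")
conn.close()
```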

Predictive analysis would be the most appropriate form of analysis in this particular case, to analyse energy consumption over time in relation to historical data provided by the government, alongside comparative analysis in relation to the comfortable/affluent ACORN group.
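As a hedged illustration of that idea, the sketch below fits a straight-line trend to a few months of entirely made-up consumption figures with NumPy and projects it one step ahead; a real analysis would use the government's historical data and a proper forecasting model.

```python
import numpy as np

# Hypothetical monthly consumption figures in kWh, for illustration only.
months = np.arange(1, 13)
kwh = np.array([310, 305, 290, 260, 240, 225,
                220, 230, 250, 275, 300, 315])

# Fit a simple linear trend (degree-1 polynomial) to the historical values.
slope, intercept = np.polyfit(months, kwh, 1)

# Project the trend one month beyond the observed data.
forecast = np.polyval([slope, intercept], 13)
print(f"Trend: {slope:+.1f} kWh per month, forecast for month 13: {forecast:.0f} kWh")
```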

Metadata, otherwise known as data that describes other data, covers a vast range of information: who recorded the data, why it was recorded, what units it is in, or what copyright applies to its use. Metadata is a vital resource for data scientists, who more often than not need further data to explore and compare with other data sets, often using data that other people have created or produced.
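A lightweight way to capture that kind of metadata is a small "sidecar" file stored next to the data itself. The sketch below writes one as JSON; the field names and values are invented for the example rather than taken from any particular standard.

```python
import json

# Hypothetical metadata describing the energy consumption data set.
metadata = {
    "title": "Household energy consumption (working copy)",
    "recorded_by": "Smart meter trial participants",
    "purpose": "Study of consumption patterns over time",
    "units": {"kwh": "kilowatt-hours", "timestamp": "ISO 8601"},
    "licence": "Open Government Licence v3.0",
}

# Store the description alongside the data so others can reuse it correctly.
with open("energy_consumption.metadata.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, indent=2)
```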


Popular posts from this blog

FutureLearn Week 2: Post 1 of 4

Open data has been growing for some time now, with data being made open on various sites globally. There are many advantages to open data, including the ability to share public data sets so that they can be compared, and such sources can also be used for environmental purposes or health issues. A disadvantage of open data is that the site providing the data may be inherently biased, shaped by the opinions of its creator.

Post #1: Definition of Big Data

Big Data is a term used to describe a massive volume of both structured and unstructured data that is so large it is difficult to process using traditional database and software techniques. In most enterprise scenarios the volume of data is too big, it moves too fast, or it exceeds current processing capacity. Big Data comes from text, audio, video, and images. It is analysed by organisations and businesses for reasons such as discovering patterns and trends related to human behaviour and our interaction with technology, which can then be used to make decisions that affect how we live, work, and play. Big Data can also be analysed for insights that lead to better decisions and strategic business moves.

Post 11: (Question 7) Limitations of traditional data analysis

As with all things, there will always be limitations to data analysis, because the data is created by humans and is therefore subject to human error. Some of the limitations you may come across are that the data may be incomplete, whether through missing values or the lack of a section of necessary data, which could severely limit the data's usability. Survey data can also be scrutinised due to the human component: people do not always provide accurate information in surveys, and many are likely not to answer truthfully. For example, if a person were asked how much alcohol they consume in a week, they are likely to say less than their actual intake.
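As a small sketch of how incomplete data might be spotted before analysis, the snippet below counts missing values per column with pandas; the file name and the 20% threshold are placeholders chosen for the example.

```python
import pandas as pd

# Placeholder survey data file used purely for illustration.
df = pd.read_csv("survey_responses.csv")

# Count missing values in each column to gauge how complete the data is.
missing_per_column = df.isna().sum()
print(missing_per_column)

# Flag columns where more than 20% of the values are missing (arbitrary threshold).
print(missing_per_column[missing_per_column > 0.2 * len(df)])
```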