Big Data Getting Attention in C-Suite

Analytics, Big Data, Data

According to a recent survey of executives at Fortune 1000 companies and large government agencies, the C-suite has high hopes for the value that analytics on Big Data promises, but is this just a wild pipe dream?

The survey revealed that 85 percent of respondents expected to gain substantial business and IT benefits from Big Data initiatives, with the main expected benefits being ‘fact-based decision making’ and ‘customer experience’. Sound familiar so far?

Whilst it is encouraging to read that companies are raising their hopes of BI having a “positive impact across multiple lines of business”, the recorded constraints in Big Data analytics are the same as those for any BI implementation over the last 10 years.

Big Data and BI

Cloud BI, Data

Just returned from an Altis breakfast briefing where William McKnight was talking about technology supporting Big Data. Although I am a strategic performance consultant, I find it very important to understand both the technology options and the approaches IT uses to deliver on the business intelligence requirements of the business. As data is the foundation of performance management, every time there is a big change in data, there is a big change in both the business and technical requirements.

  • On the business side – if your competitors are accessing the insight from Big Data, you need to stay on the same playing field if you want to remain competitive in the same game.
  • On the technology side – …

Choosing the Right Format to Display Your KPI

Dashboards, Data

In determining the best format for displaying a KPI on a dashboard you need to understand your data – its format and the profile of performance.

There are three types of data display:

  1. Raw – is purely a snapshot of what has happened in the past reporting period. This is suited to performance that is chunky [occurs without predictable frequency] and has high variability in data value. This KPI type is typically presented using bar graphs.
  2. Change – Often we are not interested in the actual value; we only want to know the percentage change, which might indicate a growth rate. This is best displayed using an above/below graph with the horizontal axis set at 0: positive change values plot above the line, negative values below.
  3. Trend – the most telling type of KPI graph is the line graph. It shows the trend of performance and is a stronger indicator of whether action is needed than either the raw or change representations.
Whilst I personally prefer to have all data in a trending style, this is often not appropriate or possible, depending on the format of the data and the profile of the performance.
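The “change” display type above boils down to a simple calculation: convert raw period values into percentage changes, which then plot above or below a zero axis. A minimal sketch in Python (the monthly figures are made-up illustration data):

```python
# Sketch of the "change" display type: raw snapshot values become
# percentage-change values suitable for an above/below graph.
# The monthly sales figures are hypothetical.

def percent_changes(values):
    """Return period-over-period percentage change for a series of raw values."""
    return [
        round((curr - prev) / prev * 100, 1)
        for prev, curr in zip(values, values[1:])
    ]

monthly_sales = [100, 110, 99, 120]       # raw values (bar graph display)
changes = percent_changes(monthly_sales)  # change values (above/below display)

for month, change in enumerate(changes, start=2):
    side = "above" if change >= 0 else "below"
    print(f"Month {month}: {change:+.1f}% ({side} the zero line)")
```

The same series drives all three display types; only the transformation applied before charting differs.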

Virtualized Data and Automated Discovery

BI Infrastructure, BI Strategy, Data

In an IT world that is rapidly becoming virtualized at the hardware and software levels it is not too much of a stretch to envision virtualization at the data level – easy to dream about, not so difficult to create, or is it?

As businesses continue to struggle to capture, clean and transform their data into a format best suited to BI tools, the adoption of BI in critical decision making is stalled.

BI visualization tools are increasingly being integrated directly with applications, relational databases and cubes using web services and SOA, and innovations such as columnar databases promise to overcome the format and power constraints that are holding BI adoption at subpar levels.

Virtualized data would abstract the data from its source silo structure and instead present it as a consumable entity, regardless of the ETL processes it may have to pass through to become usable by the end BI tool. This abstraction supports the concept of automated discovery, where data from any source, in any format, is consumable by BI applications.
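The abstraction described here can be sketched as an adapter layer: each silo keeps its native format, while consumers see one uniform row interface. This is a toy illustration of the idea, not any vendor’s API; the class and field names are invented:

```python
# Toy sketch of data virtualization: consumers query one abstract view
# while adapters hide each silo's native format. All names are hypothetical.

class CsvSource:
    """Stands in for a flat-file silo."""
    def __init__(self, text):
        self.text = text

    def rows(self):
        header, *lines = self.text.strip().splitlines()
        keys = header.split(",")
        return [dict(zip(keys, line.split(","))) for line in lines]

class DictSource:
    """Stands in for a relational/API silo that already yields records."""
    def __init__(self, records):
        self.records = records

    def rows(self):
        return list(self.records)

class VirtualView:
    """Presents all silos as one consumable entity, whatever the origin."""
    def __init__(self, *sources):
        self.sources = sources

    def rows(self):
        out = []
        for src in self.sources:
            out.extend(src.rows())  # each adapter normalizes its own silo
        return out

view = VirtualView(
    CsvSource("customer,region\nAcme,EU"),
    DictSource([{"customer": "Globex", "region": "US"}]),
)
print(view.rows())
```

The BI tool only ever sees `rows()`; whatever ETL each adapter performs stays behind the abstraction, which is what makes automated discovery of new sources plausible.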

With over 80 percent of information relevant to daily business decisions now unstructured, such advances in data management innovation are critical to overcoming current constraints. Omniture, a web analytics vendor, is about to release a product that monitors API traffic on the Web, along with keyword tracking to measure application traffic and consumption patterns. This would, for example, allow online retailers to determine the best page layouts to sell more products. This comparative intelligence can be fed into BI analytic or visualization tools to enrich customer profiling data.

SAP’s Business Objects Explorer also tracks end user activity across related topics at one location and aggregates it with related data feeds. Explorer is data feed agnostic – leaning towards the type of abstraction that defines virtualization. Information may be drawn from text, voice, video, transaction data or anything else as a mashup of structured and unstructured content with mapping providing contextual relevance.

No amount of ‘intuitive interface’ design will match human capability, but a lot can happen behind the scenes that surpasses the human ability to correlate relationships across massive volumes of data in very short time intervals. This contextual mapping has advanced far beyond the traditional integration of data warehousing and is heralding another major leap in BI infrastructure capability.

Good Things Happening With Bashups [BI Mash Ups]

BI Solutions, Data

Mashups are common in many areas today, and are now invading the BI space.

Mashups are one way of resolving the ever-present problem of data isolation. By providing context, they add significantly to the value of data.

From a technical perspective, integrating BI with contextual data is not overly complicated; the more important consideration is data quality. This is largely a business issue of agreeing on common definitions for both data and performance indicators.

Typical examples of Bashups include:

  • BI analytics with GIS information – for example, a retailer can analyze market data within a certain radius of a potential store.
  • Integrated search and in-memory analytics – making it easier to index large amounts of structured data and build high-performance analytical applications against increasingly large data sets.
  • Insurance Services – Linking health claims data and wellness program data – enabling employers to analyze the cost effectiveness of different programs and benefits.
  • Financial services – linking sales CRM data with product data – enabling customised sales proposals for specific customers in just a few minutes, then tracking the proposal through its lifecycle of authorization and extension of product trials.
  • Software Sales – In one instance, just tracking software trials enabled one company to reduce the trial to customer conversion time from an average of 75 days to between 36 and 40 days.

This level of data analytics sophistication is propelling businesses to better enforce standardisation, so that future bashups can be created.
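The retailer/GIS example above can be sketched in a few lines: compute great-circle distances and aggregate market data within a radius of a proposed store site. The coordinates and sales figures below are invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

# Sketch of a BI + GIS bashup: filter market data to a radius around a
# candidate store location. All locations and figures are hypothetical.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

market_data = [
    {"suburb": "A", "lat": -33.87, "lon": 151.21, "annual_sales": 1_200_000},
    {"suburb": "B", "lat": -33.95, "lon": 151.25, "annual_sales": 800_000},
    {"suburb": "C", "lat": -34.40, "lon": 150.90, "annual_sales": 600_000},
]

site = (-33.88, 151.20)  # proposed store location
radius_km = 15

nearby = [m for m in market_data
          if haversine_km(site[0], site[1], m["lat"], m["lon"]) <= radius_km]
print("Addressable annual sales:", sum(m["annual_sales"] for m in nearby))
```

The mashup value comes from the join itself: neither the sales data nor the geographic data answers the siting question on its own.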

Mash Up Technology includes:

  • JustSystems
  • Serena Software’s MashUp Composer

And to add to the presentation of these more advanced BI solutions, gaming interfaces are now being used to graphically improve the presentation of information.

Real Learnings From Real BI Implementations

BI Strategy, Data, IT Strategy

There is a lot written on best in class practices for deploying BI in operational business intelligence projects. For some real world examples with more specifics on ‘How To’, this post on CIO.com India offers insights from several companies.

Two key requirements in most projects focus on moving data closer to the business and monitoring all data through a single system. However, in reality most projects have found that processes don’t fit neatly into single systems.

Data is still consolidated into a single data warehouse where formats can be transformed and analytic rules applied. For example, time-critical information such as production data is gathered more frequently and often supplemented with other types of operational data. However, rather than using the data warehouse as the platform for real-time analysis, discrete software tools are used to analyze transactional data.

Attempting real-time analysis typically requires a big infrastructure upgrade that may not be economically justified in many companies. Not all processes, or even most, need to be monitored in real time. Latency schedules should not be driven from the data availability end, but rather from the information consumption perspective. Most businesses struggle to consume information on more than a daily basis. Unless the data relates to mission-critical transactions, real time is not required.
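A consumption-driven latency schedule might look like the sketch below: each feed gets the cheapest refresh tier that still beats how often the business actually consumes it. The feed names and cadences are hypothetical:

```python
# Sketch of consumption-driven latency scheduling: refresh frequency is set
# by how often information is consumed, not by how fast the source can
# deliver it. Feed names and consumption cadences are hypothetical.

CONSUMPTION_CADENCE_HOURS = {
    "fraud_alerts": 0.25,      # mission-critical: acted on within minutes
    "production_output": 4,    # reviewed a few times per shift
    "sales_summary": 24,       # consumed once in a daily report
    "hr_headcount": 168,       # looked at weekly
}

def refresh_schedule(feeds):
    """Choose the cheapest refresh tier that still beats consumption cadence."""
    schedule = {}
    for feed, hours in feeds.items():
        if hours < 1:
            schedule[feed] = "near-real-time"   # only for mission-critical feeds
        elif hours < 24:
            schedule[feed] = "intra-day batch"
        else:
            schedule[feed] = "daily batch"
    return schedule

print(refresh_schedule(CONSUMPTION_CADENCE_HOURS))
```

Only the one mission-critical feed ends up needing real-time infrastructure; everything else is served by batch, which is the economic point the post is making.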

Selecting the right data for real-time analysis is based on what information provides insight into completion rate baselines. Data providing insight into how customers use products, or how to optimize business processes, is not always required in real time.

Have BI Vendors Mastered Text Mining?

BI Solutions, Data

To date the focus in BI has been on structured data. However, large volumes of information are contained within unstructured documents such as blogs, wikis, news feeds, transcripts, PDFs, email, word documents, and multimedia. In fact, one report I read suggested that 85 percent of that company’s data is unstructured. Whilst this may be a little on the high side for many businesses, it does highlight the increasing volumes of data that are not being captured in current BI tools.

In response to this trend, BI vendors are gearing up efforts in text mining capabilities. Both Google and Microsoft have released Enterprise Search solutions that parse unstructured data sources throughout the enterprise to provide results similar to those of Internet search engines. However, this fails to provide the deeper answers that BI tools have become synonymous with. In other words, search can tell you what is happening, but not why it is happening.

Text mining takes unstructured data to this next level, by transforming text into a structured format. It automatically classifies documents and identifies key relationships that provide insight into the WHY. Such relationships are not possible with standard Enterprise Search.
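A toy sketch of that transformation step: turning a free-text document into a structured record with a topic label and entity relationships. Real text mining relies on NLP models; the keyword and entity lists here are purely illustrative:

```python
import re
from itertools import combinations

# Toy sketch of text mining: classify a document and extract entity
# co-occurrences as candidate relationships. The keyword lists are
# illustrative only; real systems use trained NLP models.

TOPIC_KEYWORDS = {
    "complaint": {"refund", "broken", "delay"},
    "praise": {"great", "love", "recommend"},
}
ENTITIES = {"product", "delivery", "support"}

def mine(document):
    """Transform unstructured text into a structured record."""
    words = set(re.findall(r"[a-z]+", document.lower()))
    # classify by keyword overlap with each topic
    topic = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    found = sorted(words & ENTITIES)
    # co-occurring entity pairs hint at relationships (the "why", not just "what")
    pairs = list(combinations(found, 2))
    return {"topic": topic, "entities": found, "relationships": pairs}

doc = "The delivery delay broke my trust and the product arrived broken."
print(mine(doc))
```

Once the record is structured like this, it can flow into the same mining and analytics pipelines as any other structured source, which is the connection the paragraph above calls for.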

Text mining is much more than searching and filtering to find the right document – it needs to extract key data and insights from such documents and connect them with the processes and tools used for mining structured data. To date, BI vendors have yet to achieve this capability. However, with the speed of progress being made in data mining, we can hope this capability is not too far off.

Loyalty Programs – Should Retailers Re-Sign or Resign?

BI Theory, Data

Retail and consumer companies are reviewing customer loyalty programmes as economic conditions threaten profit margins. Before committing to a further agreement term with a program provider, retailers are analysing whether they are getting a good return on the schemes.

Retailers who subscribe to programs that are up for renegotiation are reevaluating the merits of such schemes.

Overall, it appears that such loyalty programs are still performing, but analysis has found that some schemes reward unprofitable customers, and that some retailers are carrying a high cost of retention. Loyalty programs need to be managed correctly to realise their full value.

Some programs – such as the petrol discount vouchers issued by supermarkets – are not strictly loyalty programmes but a short-term inducement to customers.

The decision for retailers is in either accepting loyalty programmes as a business overhead or dropping them at the risk of losing business.

In spite of direct marketing being somewhat expensive, data extracted from loyalty programs allows customer microsegments to be identified faster and more cheaply.

DW Growth Reflects Rapid Growth In Data Collection

Data

I was just browsing through a Teradata magazine and came across this snippet, which reminded me how fast data growth is, and the need for solutions to keep up with that growth:

1992 — First system over 1TB went live

1996 — Demonstrated world’s largest data warehouse with 11TB of data

1999 — World’s largest data warehouse in production with 130TB of user data

2004 — Largest centralized data warehouse housing 423TB of data

2006 — Teradata certified up to 4 petabytes
 
That’s one almighty data warehouse!
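For a sense of scale, the 1992 and 2004 milestones (the two comparable production figures) imply a compound growth rate of roughly 65 percent per year:

```python
# Back-of-the-envelope growth rate from the milestones above:
# 1 TB in production in 1992 to 423 TB in production in 2004.

start_tb, end_tb, years = 1, 423, 2004 - 1992
cagr = (end_tb / start_tb) ** (1 / years) - 1  # compound annual growth rate
print(f"~{cagr:.0%} per year")
```

At that rate the warehouse roughly doubles every year and a half, which is why the 2006 certification jump to 4 petabytes is less surprising than it first looks.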