For Business Analysts, QAs and Ad-hoc Queries
Business analysts with no .NET knowledge will probably not be able to create and configure an Entity Data Model on top of cubes, but they will be able to consume these cubes by using LINQPad, SDX (a greatly simplified dialect of LINQ) and SSAS Entity Framework Provider.
SDX LINQ with LINQPad will allow a business analyst to do what he or she could not do using end-user-oriented, pivot-table-like OLAP tools, e.g.:
- Query cube data using LINQ set operations like Except(), Intersect(), Union() and Distinct().
- Join cube data using alternative joins not defined in a cube.
- Conveniently filter cube data by multiple selected dimension members, even when the selections and dimensions are large.
- Use complex conditions combining OR and AND.
- Use complex conditional, computational and navigational logic developed by professional .NET developers (entity methods, computed properties and navigation properties) in his or her queries, or for query parameter validation. This scenario makes such powerful options of the .NET platform as object-oriented polymorphism and unit testing available to cope with the proliferation of duplicated conditionals, test-case explosion and other complicated programming problems, which pure query languages like SQL or MDX cannot handle gracefully.
- Join and aggregate data from multiple cubes.
- Join and aggregate data from multiple (and potentially heterogeneous) data sources, including multidimensional cubes and SSAS Tabular, non-Microsoft cubes like Oracle Essbase, Hadoop + Hive, relational databases, NoSQL databases with LINQ support, flat files like CSV, MS Excel spreadsheets, PowerPivot and OData web services. A business analyst would be able to filter and/or aggregate heterogeneous data server-side, convert it into in-memory, array-like structures with .ToArray() or .AsEnumerable(), and then join and further process the resulting subsets of data in memory.
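For instance, here is a minimal LINQ sketch of the set-operation and filtering scenarios above. The `context`, `Sales` entity and its properties are hypothetical names, assuming an Entity Data Model generated over a sales cube; they are not part of any real cube schema.

```csharp
using System.Linq;

// Customers who bought in 2023 but not in 2024 (set difference):
var lostCustomers = context.Sales
    .Where(s => s.Year == 2023)
    .Select(s => s.CustomerName)
    .Except(context.Sales
        .Where(s => s.Year == 2024)
        .Select(s => s.CustomerName));

// Filtering by a large hand-picked set of dimension members,
// combined with an OR + AND condition:
var selectedRegions = new[] { "North", "South", "West" };
var bigSales = context.Sales
    .Where(s => selectedRegions.Contains(s.RegionName)
                && (s.Amount > 10000 || s.Quantity > 500))
    .ToArray();
```

Such queries are difficult or impossible to express in a drag-and-drop pivot-table tool, but are a few lines of LINQ in LINQPad.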
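The heterogeneous-source scenario might be sketched like this (again with hypothetical entity, property and file names): aggregate server-side in each source, materialize the subsets, then join them in memory.

```csharp
using System.IO;
using System.Linq;

// Aggregate in the cube, server-side, then materialize:
var cubeTotals = cubeContext.Sales
    .GroupBy(s => s.ProductName)
    .Select(g => new { Product = g.Key, Total = g.Sum(s => s.Amount) })
    .ToArray();

// Load reference data from a CSV flat file (naive parsing, for illustration):
var productOwners = File.ReadLines("owners.csv")
    .Select(line => line.Split(','))
    .Select(f => new { Product = f[0], Owner = f[1] })
    .ToArray();

// Join the two in-memory subsets:
var report = from t in cubeTotals
             join o in productOwners on t.Product equals o.Product
             select new { t.Product, t.Total, o.Owner };
```

Because both subsets are already filtered and aggregated before materialization, the in-memory join stays small even when the underlying sources are large.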
When you already have your data in OLAP cube(s) (SSAS, Essbase, Palo, SAP NetWeaver BW, SAS OLAP Server and so on), using a relational DBMS for heavy read queries likely does not make much sense.
When you would like to expose data in your existing OLAP cubes to external applications via RESTful OData services.
When you are developing read-heavy application(s) in .NET (e.g. reporting or data analysis), especially with complicated conditional logic, if such an application is sufficiently complex (accesses dozens or hundreds of tables) and neither denormalization nor a columnstore index on a normalized RDBMS is an option (see below for when denormalization and a columnstore index are not an option).
- Once you have your cubes designed, creating all the various permutations of reports and queries takes very little time and effort; moreover, end-users can often create reports themselves using Excel pivot tables and other drag-and-drop tools and pivot grids, with no joins required.
LINQ to Entities and SSAS Entity Framework Provider come to the rescue when you need:
So we suggest OLAP cubes for both end-users and .NET developers in those cases. Cubes are quite fast to design; even though they require some knowledge to support and to troubleshoot cube-processing issues, that investment is repaid by the performance gain, and SSAS Entity Framework Provider can spare you from learning MDX, which would be a larger investment in comparison.
When your data volume is too large for an RDBMS (even denormalized, even with a columnstore index), and quick response time is highly desired to stay competitive.
When you have an existing data access layer written using LINQ to Entities, you often query aggregated data, and the speed of your read queries does not meet user expectations/requirements due to your data volume despite DB optimization, or when such optimization would require a significant effort or constant maintenance (due to the fragile nature of denormalization and dozens or hundreds of aggregation permutations).
When your application uses LINQ to Entities, a data warehouse (a denormalized, read-only reporting DB) and data originating from a single operational source DB, but the ETL between the operational DB and the data warehouse is complex and takes too much time/resources due to denormalization, or does not fit its time window.
- OLAP technology would allow you to streamline and speed up your ETL by reducing the time and complexity spent on denormalization-related data processing. (Cube processing also takes some time, but you have fewer opportunities for performance mistakes, more options to speed the processing up, and decision automation in the form of an aggregation design wizard.)
When your existing OData services do not cope with your data volume.
When Columnstore Index is not an Option
When your clients own MS SQL Server 2012 Standard Edition or prior versions of MS SQL Server, use SQL servers from vendors other than Microsoft, or use NoSQL servers, and they are reluctant to move to MS SQL Server 2012 Enterprise Edition.
When Columnstore Index does not perform fast enough without wide denormalization.
When a Columnstore Index on your data volume requires more RAM than you can afford.
When slow cold run is not acceptable.
When Denormalization is not a Good Option
If you are developing complex (dozens or hundreds of tables), data-heavy applications, especially if you are going to deal primarily with aggregated data. In such a case you would pay a very high cost for violations of normal forms and of the Don't Repeat Yourself principle.
Pre-aggregated tables would give you too many permutations to maintain and store; this problem is known as database explosion.
Manually denormalizing dozens or hundreds of tables leads to fragile solutions and multiple DB schemas to maintain and comprehend.
SSAS solves the manual denormalization and database explosion problems by using proprietary pre-joined data formats, compression and automation: it provides a usage-driven aggregation design wizard for pre-aggregated data structures.