Datagaps specializes in test automation for two areas: 1) data integration patterns such as data warehousing and migration, and 2) business intelligence platforms such as Oracle Business Intelligence, Tableau, and Business Objects. Datagaps was selected as one of the Top 100 Most Promising Big Data Companies by CIO Review magazine for its BigData 100 special edition in 2014, and in 2012 it was awarded US patent US20120290527, "Data extraction and testing method and system," for its ELV Architecture.
The role of BI Validator in major BI upgrades
Every few years, BI vendors ship a major release that forces the companies using these products to take on a complex BI upgrade project: for example, the OBIEE 11g to 12c upgrade or the Business Objects 4.1 to 4.2 upgrade. Testing is an important aspect of these upgrade projects, and this blog explains how customers can leverage BI Validator to test them.
1. Upgrade Test Plan: This test plan compares reports across the pre- and post-upgrade environments and ensures that they match expectations (a conceptual sketch of this comparison follows the list). If BI Validator identifies any differences in the data sets, it marks the test plan with a "Warning" status.
Note: Since the UI may look very different between OBIEE 11g and 12c, compare the data in the reports rather than using the PDF comparison option.
2. As part of the above test plan, BI Validator can also verify that all reports are present in the post-upgrade environment and that there is no degradation in performance.
3. Stress Test Plan: This test plan simulates concurrent loads (e.g., 10, 100, or 500 concurrent users) on reports and dashboards and ensures that there is no degradation when more users access the BI system.
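For readers curious what the comparison in step 1 involves under the hood, here is a minimal sketch in Python with pandas; the report name, export files, and key columns are hypothetical, and BI Validator itself performs this without any code:

```python
import pandas as pd

# Hypothetical CSV exports of the same report from the 11g and 12c environments.
pre = pd.read_csv("revenue_report_11g.csv")
post = pd.read_csv("revenue_report_12c.csv")

# Sort on a stable business key so row-order differences don't read as data differences.
key = ["REGION", "YEAR"]
pre = pre.sort_values(key).reset_index(drop=True)
post = post.sort_values(key).reset_index(drop=True)

# compare() assumes both frames have the same shape and labels; a shape
# mismatch raises an error, which is itself a finding worth flagging.
diff = pre.compare(post)
print("Match" if diff.empty else f"{len(diff)} differing rows:\n{diff}")
```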
All of the above can be achieved with zero programming and just a couple of clicks. Try BI Validator now and see within a few minutes how it can help you during upgrades!
Testing Automation of OBIEE Subject Areas
Subject Areas in Oracle OBIEE are great for business users. They hide the complexity of warehousing projects and present an easy-to-use mechanism for creating ad hoc analyses based on the business user's needs.
From an IT perspective, however, testing subject areas is fairly complex. Every subject area is really a grouping of dimension folders (with attributes) and measures, also known as facts. The dimension folders may have hundreds of attributes and are connected to fact tables via foreign keys; in the industry this is typically referred to as a star schema. Engineers, business analysts, and quality assurance teams struggle to ensure that business users do not encounter unpleasant surprises in the form of SQL errors when they create and execute analyses from the subject areas.
To create a new analysis, business users are free to pick any subject area and select an arbitrary combination of dimension attributes and facts.
Now, here is the challenge. Assume that there are four dimensions:
Account
Leads
Opportunities
Time
Also assume that each dimension has 20 attributes and that the "Fact" table has referential integrity with all of the above dimensions. Now, to address a new service request from the business, IT has to tweak a few joins. How can the subject areas be tested so that enhancements, bug fixes, and other changes do not cause any regression?
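To see why exhaustive testing is impractical, here is a quick back-of-the-envelope count (assuming a single fact measure for simplicity):

```python
# Each of the 4 dimensions exposes 20 attributes, so a query may include any
# non-empty subset of 80 attributes alongside the fact.
attribute_subsets = 2 ** (4 * 20) - 1
print(f"{attribute_subsets:.3e} possible attribute combinations")  # ~1.209e+24
```

No manual test suite can enumerate that space, which is why the combinations have to be sampled systematically.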
From our experience, these are some of the common issues encountered while creating an analysis:
• A specific combination of attributes and facts may result in ODBC errors.
• Depending on how the RPD is modeled, the physical query may contain 'CAST AS NULL' when the BI Server is not able to determine the right way to join the tables.
To address the above issues and risks, we have introduced a new test plan in BI Validator: the Subject Area Test Plan. It drastically simplifies the testing of subject areas by automatically generating logical queries using various combinations of dimension attributes and facts, as shown below:
• Dimension Attributes Only - Creates one logical query per dimension folder by selecting all the attributes in that folder.
• Fact Measures Only - Creates one logical query per fact folder by selecting all the measures in that folder.
• Single Dimension to Single Fact Only - Picks all the dimension attributes of a dimension folder in combination with one fact measure at a time. This category typically produces the largest number of logical queries (see the sketch after this list).
• Single Dimension to Multiple Facts - Picks all the dimension attributes of a dimension folder in combination with all the facts in a fact folder.
• Multiple Dimensions to Single Fact - Picks one attribute from each dimension folder and one fact at a time.
• Multiple Dimensions to Multiple Facts - Picks one attribute from each dimension folder in combination with all the facts in a fact folder.
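To make these categories concrete, here is a hedged sketch in Python that generates OBIEE-style logical SQL for the "Single Dimension to Single Fact Only" category; the subject area, folder, and column names are illustrative, not taken from a real RPD or from BI Validator's internals:

```python
from itertools import product

# Illustrative subject-area metadata; names are hypothetical.
dimensions = {
    dim: [f'"{dim}"."Attr{i}"' for i in range(1, 21)]
    for dim in ("Account", "Leads", "Opportunities", "Time")
}
facts = ['"Fact"."Revenue"', '"Fact"."Quantity"', '"Fact"."Units"']

def single_dimension_single_fact(subject_area):
    """One logical query per (dimension folder, fact measure) pair:
    all attributes of the dimension plus exactly one measure."""
    for (dim, attrs), fact in product(dimensions.items(), facts):
        columns = ", ".join(attrs + [fact])
        yield f'SELECT {columns} FROM "{subject_area}"'

queries = list(single_dimension_single_fact("Sales"))
print(len(queries), "logical queries")  # 4 dimensions x 3 measures = 12
```

Running such generated queries through the BI Server and checking each for errors is, conceptually, what the test plan automates.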
While these strategies do not cover every possible combination of dimension attributes and facts, they represent a basic set of tests that validates the subject area with minimal manual effort.
We developed this test plan based on decades of experience in the BI space and are really excited about it.
Data Testing Automation for Salesforce using ETL Validator
Over the last few years, the Salesforce platform has become an incredible force in the market for various reasons. Of course, the most obvious use case is its CRM capabilities. In addition, many organizations have started using the power of the Force.com platform to build and deploy custom applications in the cloud at an incredibly fast pace.
While this journey is truly exciting, there will always be an underlying need to test the data and ensure that it is as expected. In this blog, I highlight a few use cases and show how ETL Validator can help you address them.
Use Case 1: Comparing the data between a Salesforce org and an on-premise database
Consider a simple order-capture application that has been moved from an on-premise system to Salesforce. Also assume that the application has a few basic objects: Account, Products, Order, and Order Line Item. Now, after you move the data, a few questions emerge:
Did I get ALL the accounts from the on-premise application into the Salesforce app?
What is the best way for me to compare the counts?
Do all the records match between Salesforce and the on-premise system from a data integration perspective?
Is referential integrity between accounts, orders, and order line items properly maintained in Salesforce?
These are simple questions, but they are extremely important for having confidence in the data migration to the Salesforce platform. Using ETL Validator, you can easily create connections between Salesforce and your on-premise database and execute the above tests in no time.
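As an illustration of what the count check amounts to, here is a minimal sketch using the open-source simple-salesforce library, with SQLite standing in for the on-premise database; the credentials, file name, and table name are placeholders, and ETL Validator itself does this through its UI:

```python
import sqlite3
from simple_salesforce import Salesforce

# Placeholder credentials; in practice these come from a secure store.
sf = Salesforce(username="user@example.com", password="***", security_token="***")

# Count accounts on both sides.
sf_count = sf.query("SELECT COUNT() FROM Account")["totalSize"]

db = sqlite3.connect("onpremise.db")  # hypothetical on-premise extract
(db_count,) = db.execute("SELECT COUNT(*) FROM accounts").fetchone()

print(f"Salesforce: {sf_count}, On-Premise: {db_count}")
assert sf_count == db_count, "Account counts diverge after migration"
```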
Use Case 2: Baselining the Product Catalog
Let's say there are 1,000 products in the catalog, and it is important to ensure that the metadata of this product catalog does not get accidentally modified. How would you do that?
In ETL Validator, you can baseline a table and then run tests on an ongoing basis to ensure that the data does not get accidentally modified. If it does, the platform can send out notifications listing the records whose data is not as expected.
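Conceptually, baselining boils down to snapshotting the table once and diffing later runs against that snapshot. A minimal sketch with pandas, where the file name and key column are assumptions:

```python
import pandas as pd

BASELINE = "product_catalog_baseline.csv"  # hypothetical snapshot file

def snapshot(catalog: pd.DataFrame) -> None:
    """Persist the current state of the product catalog as the baseline."""
    catalog.to_csv(BASELINE, index=False)

def check_against_baseline(catalog: pd.DataFrame, key: str = "ProductCode") -> pd.DataFrame:
    """Return the cells that differ from the baseline (empty frame = no drift).
    Assumes the same set of products; added or dropped rows need a separate check."""
    baseline = pd.read_csv(BASELINE).set_index(key).sort_index()
    current = catalog.set_index(key).sort_index()
    return baseline.compare(current)

# Usage: call snapshot() once, then check_against_baseline() on a schedule;
# a non-empty result would feed the notification step described above.
```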
Use Case 3: Metadata Comparison
Over a period of time, it is important to track changes to profiles, roles, access privileges, etc. in the Salesforce platform, so that only the expected changes go in with each internal release and nothing slips through the cracks.
Using ETL Validator, you can take a baseline of the object metadata and then compare it over time. If there are any differences then, as in the previous use case, ETL Validator can send out notifications and alert administrators to unexpected changes.
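One way to picture this kind of check, independent of ETL Validator's internals, is diffing two snapshots of object metadata taken at different times; a hedged sketch with simple-salesforce, where the credentials and baseline file are placeholders:

```python
import json
from simple_salesforce import Salesforce

# Placeholder credentials, as in the first use case.
sf = Salesforce(username="user@example.com", password="***", security_token="***")

def field_metadata(obj_name):
    """Snapshot an object's fields as a name -> selected-properties map."""
    description = getattr(sf, obj_name).describe()
    return {f["name"]: {"type": f["type"], "updateable": f["updateable"]}
            for f in description["fields"]}

# First run: persist a baseline. Later runs: diff against it.
with open("account_fields_baseline.json") as fh:  # written by an earlier run
    baseline = json.load(fh)
current = field_metadata("Account")
changes = {name: (baseline.get(name), props)
           for name, props in current.items() if baseline.get(name) != props}
print("Unexpected metadata changes:", changes or "none")
```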