#edit sql tables in bulk
codician · 9 months ago
Video
youtube
SQL Server Table Editor / Management Tool | Visual Studio
tecnology456 · 4 years ago
Text
Oracle Change Data Capture – Development and Benefits
Oracle Change Data Capture (CDC) is a technology widely used for activities that are critical to today’s organizations. CDC is optimized for detecting and capturing insertions, updates, and deletions that are applied to tables in an Oracle database. The technology is also an essential feature of any Oracle replication solution, as CDC classifies and records changed data in a relational format that is suitable for use in EAI, ETL, and other applications.
Businesses using Oracle databases gain several advantages from Oracle CDC. Data warehousing costs are lower, and real-time data integration is possible because data is extracted and loaded into data warehouses and other data repositories incrementally and in near real time. Because only the data that has changed since the last extraction is transferred, Oracle CDC does away with the need for bulk data updates, batch windows, and complete data refreshes that can disrupt users and businesses.
Development of Log-Based Oracle CDC
Oracle 9i, released in 2001, introduced a new feature: a built-in mechanism for tracking and storing changes in real time as they happened – a native Oracle CDC capability. However, database administrators faced a major disadvantage, because it depended on placing triggers on tables in the source database, thereby increasing processing overhead.
This feature was refined in Oracle 10g. A new Oracle CDC technology was introduced that used the redo logs of a database to capture and transmit data, without the invasive database triggers used previously. An Oracle replication tool called Oracle Streams was used for this activity.
This new version of Oracle CDC was a log-based form of change data capture that was lightweight and did not require changes to the structure of the source table. Even though this model of CDC became quite popular, Oracle decided to withdraw support for Oracle CDC with the release of Oracle 12c. Users were encouraged to switch to the higher-priced Oracle GoldenGate, an Oracle replication software solution.
Oracle CDC Modes
The Oracle Data Integrator supports two journalizing modes.
The first is Synchronous Mode. Here, triggers are placed on tables in the source database, and all changed data at the source is captured immediately. Each SQL statement that performs a Data Manipulation Language (DML) operation (insert, update, or delete) causes the changed data to be captured as an integral component of the transaction that changed the data at the source. This mode of Oracle CDC is available in both the Oracle Standard and Oracle Enterprise Editions.
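To make the trigger-based pattern concrete, the sketch below shows a change table populated by a DML trigger. The ORDERS source table, the ORDERS_CDC change table, and the sequence are hypothetical names used only for illustration; this is the general shape of trigger-based capture, not the exact objects Oracle's CDC packages or Oracle Data Integrator generate.

-- Hypothetical change table and sequence (illustration only)
CREATE SEQUENCE orders_cdc_seq;

CREATE TABLE orders_cdc (
  change_id   NUMBER PRIMARY KEY,
  order_id    NUMBER,
  operation   VARCHAR2(1),                 -- 'I', 'U', or 'D'
  changed_at  TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- The trigger records each insert, update, and delete inside the same
-- transaction that modifies the source table.
CREATE OR REPLACE TRIGGER orders_cdc_trg
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW
BEGIN
  INSERT INTO orders_cdc (change_id, order_id, operation)
  VALUES (orders_cdc_seq.NEXTVAL,
          NVL(:NEW.order_id, :OLD.order_id),
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D' END);
END;
/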
The next is Asynchronous Mode. Here, changed data is read from the redo log files after a SQL statement performs a DML operation. Because the modified data is not captured as part of the transaction that changed the source table, this mode has no impact on that transaction.
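Asynchronous, log-based capture requires supplemental logging so that the redo logs contain enough column data to reconstruct each change. A minimal sketch of the statements a DBA would typically run is shown below; the DMS_SAMPLE.ORDERS table name is hypothetical.

-- Enable minimal supplemental logging at the database level
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Add primary key supplemental logging on a specific source table so
-- redo records can be mapped back to rows unambiguously
ALTER TABLE dms_sample.orders ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Confirm the database-level setting
SELECT supplemental_log_data_min FROM v$database;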
codeavailfan · 5 years ago
Text
SAS vs Excel Which One Is Better For Data Analytics
SAS vs Excel
In this new age of technology, new inventions that make work and business easier are appearing every day. SAS vs Excel is one of the most common questions that anyone who uses statistics runs into.
Excel is software for organizing and displaying data, and likewise, SAS is statistical software used to analyze and display data.
Today, innovations are introduced into all the devices, technologies, and new software developed every day, with the most important goal of making our lives easier. In this blog, you will learn about the most important and most common tools for data analysis: SAS vs Excel.
In the data analysis area, operations are not tied to one specific tool. Work can be done in Excel or in SAS, depending on your needs. Let's look at Excel vs SAS.
Both are part of statistics, and the analysis of statistical data is the central part of statistics. So, first of all, let's talk about data analysis.
Definition of Data Analytics?
Data analytics is the study of raw data in order to draw conclusions about that data. Many of its techniques and processes have been automated into mechanical procedures and code that act on raw data for human use.
Data analysis methods can detect patterns and measurements that would otherwise be lost in large amounts of data. This information can then be used in further procedures to improve a company's overall productivity and processes.
Data analytics projects enable organizations to increase revenue, improve operational efficiency, improve promotional efforts and customer support, respond more quickly to changes in the business model, and gain an edge over their competitors.
Depending on the specific application, the data you analyze may consist of historical records or new information prepared for real-time analysis. It may also mix internal systems with external sources of information.
 Definition of SAS?
SAS is statistical software used primarily for data management, analytics, and business intelligence. SAS stands for Statistical Analysis System; the language is written in C, and SAS runs on most operating systems.
It can be used through a programming language or a graphical interface. Developed by Anthony James Barr, it can analyze data from spreadsheets and databases. The results can be provided as tables, charts, and reports. SAS is used to view, retrieve, and analyze data, and also to run SQL queries.
The purpose of the Statistical Analysis System is to work with information from many sources. It is used to collect data from those sources and perform statistical tests on it to obtain reliable results.
Analysts can use the graphical interface to run statistical analyses, or they can work directly in the SAS programming language.
 Definition of Excel?
Excel is a spreadsheet program made by Microsoft that is part of the Microsoft Office productivity suite. It was code-named Odyssey during its development at Microsoft and was first released in 1985.
It is ideal for creating and editing spreadsheets stored with the .xls or .xlsx file extensions. Typical Excel use involves cell-based calculations, pivot tables, and various charts.
For example, you can use spreadsheets in Excel to create monthly spending plans, track operating costs, or sort and summarize large amounts of information.
Unlike word processors such as Microsoft Word, Excel documents are organized into individual cells arranged in rows and columns of information. Each of these cells may contain text or numerical values that can be calculated using a formula.
Applications of SAS VS EXCEL
Multivariate analysis
Think of someone who wants to buy stocks in bulk. That person has to weigh several factors at once, such as price, quantity, and quality. Multivariate analysis works the same way: it identifies and studies multiple measurable outcome variables at the same time.
It assesses the impact of several variables on individual outcomes, and it includes techniques such as factor analysis, analysis of variance, and multiple regression.
Business Intelligence
This refers to the technologies and practices a company applies to analyze business data. It provides insight into the current state of the business and its job prospects.
Reviewing this information helps senior management make better, more dynamic decisions. The steps involved include data collection, data extraction, data mining, complex event processing, and comparative analysis.
 Predictive analytics
As the name suggests, predictive analytics uses existing data to anticipate what may happen in the future. To draw conclusions, it applies statistical methods. For example, suppose an organization's sales pattern for product A has been consistent for many years; its forecast therefore changes little. Product B, however, sees interest change every month, so the analysis examines all the variables that cause that variation, such as the performance of the content, the customer perspective, and so on. Here, the predictive model searches for patterns found in the historical data to identify comparable cases.
Advantages of Excel vs SAS
Easy and effective comparisons
Excel is a powerful data analysis tool that lets you explore a large amount of data to detect the trends and patterns that will influence decisions. It allows you to summarize a large amount of data in a simple way.
Work together
By using Excel, you can work on spreadsheets with other users at the same time. The ability to work together streamlines workflows and enables brainstorming. The main advantage is that when your Excel worksheet is web-based, you can collaborate from anywhere.
Large amounts of data
Recent updates to Microsoft Excel improve the experience of analyzing large amounts of data. With powerful filtering, sorting, and search tools, you can quickly narrow down the data to the criteria that help you make decisions.
Microsoft Excel Mobile and iPad apps
With apps for Android and iOS, it's easy to bring spreadsheets to a customer or business meeting without having to carry your laptop. The apps make it easy to work anywhere from your smartphone: you can edit or make changes on your phone or tablet immediately.
SAS is a simple programming language, so anyone can learn it easily.
It is a programming language well suited to managing large databases.
Debugging is very simple, and error messages are easy to understand.
Algorithms are tested before they are added to or implemented in SAS.
SAS has dedicated customer support
The main advantage is that it provides full security to your data.
Features of SAS VS EXCEL
SAS
Data encryption algorithm
Management
SAS studio
Report output Format
Strong Data Analytics Abilities
Support of Various Types Of Data Format
Excel
Data Filtering
Find and replace command
Data sorting
Automatically edits the results
Password protection
Option for header and footer
Conclusion
With this blog, you can more easily choose between SAS and Excel as your software or programming language. You have learned the definitions, advantages, features, and applications of both Excel and SAS, provided by experts who are always there to help you with every problem. Make use of our services and increase your knowledge with us.
If you want Statistics Assignment Help, Statistics Homework Help, or Excel Assignment Help, our experts are available to provide SAS Assignment Help and Do My Statistics Assignment services within a given deadline.
parkhagfaipo · 3 years ago
Text
Windows server 2008 r2 standard 32 bit product key free download. Preparing for the end of support for Windows Server 2012, 2012 R2, and SQL Server 2012
Windows server 2008 r2 standard 32 bit product key free download. Results for "windows server 2008 r2 download iso full version"
Extended Security Updates for SQL Server and Windows Server / R2 | Microsoft
Oct 10,  · Update – Product Keys. We have added the below keys as they include the versions and also a few alternatives for R2. Windows Server Standard. TM24T-X9RMF-VWXK6-X8JC9-BFGM2. Windows Server Enterprise. YQGMW-MPWTJKDKM3W-X4Q6V. Windows Server Datacenter Oct 25,  · This morning Microsoft released Service Pack 2 (SP2) for Windows Vista and Windows Server Here are the download links to the various flavors. Be sure to Reviews: 1 a) SQL Server / R2 running on on-premises Windows Server / R2: if a vulnerability is detected by MSRC and rated "Critical," customers who have purchased Extended Security Updates can register eligible instances and receive the updates from the Azure portal
Windows server 2008 r2 standard 32 bit product key free download. Windows Server R2 Download Iso Full Version - CNET Download
Windows Identity Foundation for Windows 7 and Windows Server R2 (bit) Free Simplify user access for developers with pre-built security logic and tools
 In need of a list of installation keys for Windows Server and Windows Server 7? enter a product key like you had to in previous Microsoft operating systems. Windows Server R2 Standard 32 Bit Product Key windows server standard product key, windows server r2 standard product key Windows Server Activation Key Did you upgrade your domain server, not yet or waiting for server activation key.
I am sharing you Window server essentials key, activate your key bulk discount , window 7 ultimate 32 bit product key , windows Windows Server R2 Standard. download windows server r2 enterprise edition iso 32 bit,purchase window exchange server standard edition windows 7 ultimate os product key server enterprise 32 bit iso,windows 7 ultimate working product key Product Activation is controlled through the Activation tab of the MKS Toolkit control panel applet.
Windows Server R2 bit and x to contact MKS Sales to have these serial numbers bundled under one serial number or Server Datacenter. Product Key: VRDD2-NVGDP-K7QGBR4-TVFHB Product Key: NKB3R-R2F8T-3XCDP-7Q2KW-XWYQ2. Windows Product Key Windows 7 Ultimate 32 Bit. Specify the KMS Client Setup Key in the sysprep answer file. Here's a list of keys for WS R2: Windows Server R2 HPC Edition sql server r2 standard serial key,key windows vista ultimate 32 bit. Use this table to find the correct Generic Volume License Key GVLK to use our Key Client, Windows 8 Enterprise, 32JNW-9KQP47T8-D8GGY-CWCK Server, Windows Server R2 Standard, D2N9P-3P6XR39C-7RTCD-MDVJX Windows Server R2 for Itanium-based Systems..
Windows Server Datacenter without Hyper-V with Service Pack Server Standard R2 with SP1 - Windows Server Standard R2 with Pack 2 ISO - Windows XP Professional with Service Pack 2 32Bit KMS Keys. This is just a copy and paste job right now. I'll clean this up later. Office Professional Plus KMS Client Setup Key. Windows 10 Professional Windows Server R2 Server Standard Windows Vista, 7, 8, 8.
If there's no key listed for your edition of Windows, then it doesn't support volume Windows 8 Enterprise, 32JNW-9KQP47T8-D8GGY-CWCK7.
Patch Server R2 Standard 32 License Free. by stephliperhe.
windows server r2 standard product key activation Apr 21, Windows server r2 standard 32 bit product key. Windows Server R2 Standard 32 Bit Product Key In need of a list of installation keys for Windows Server and Windows Server 7? windows server standard product key crack May 18, windows server standard evaluation product key May 19, windows server standard evaluation product key Jun 15, windows server standard evaluation product key crack Mar 21, windows server standard product key Dec 10,
debtblog381 · 3 years ago
Text
Yoshimura Ems Software Free
Yoshimura Ems Software
Yoshimura Ems Software Free Trial
Yoshimura Ems software, free downloads
Yoshimura Ems
AdventNet Web NMS Express Edition v.4 - A development-free network management framework for building custom EMS/ NMS applications. Networking equipment vendors and other management solution providers rely on AdventNet Web NMS for rapid management application development and deployment.
WebNMS Framework Trial Edition v.5 - WebNMS Framework is the industry-leading network management framework for building custom Element Management System (EMS) and Network Management System (NMS) applications. WebNMS Framework is a scalable, application-centric platform that makes ..
EMS Bulk Email Sender v.3.6.0.8 - EMS Bulk Email Sender is super bulk email software, a mass email marketing program and a bulk emailing sender. You can launch your email marketing campaign in minutes: create a newsletter, select a mail list, send newsletter and then analyze report.
Continuing Ed Tracker - Michigan EMS v.1 - Continuing Ed Tracker has now been customized to work specifically to comply with Michigan EMS continuing education requirements. Simply enter course information, select which of your staff attended the course, and let the software do the ..
EMS SQL Query 2011 for SQL Server v.3.3.0.1 - EMS SQL Query for SQL Server is an easy to use software that enables you to quickly build SQL queries to Microsoft SQL databases. Visual building as well as direct editing of a query text is available. User-friendly graphical interface allows you ..
EMS SQL Query 2011 for PostgreSQL v.3.3.0.1 - EMS SQL Query for PostgreSQL is an useful software that enables you to quickly build SQL queries to PostgreSQL databases. Visual building as well as direct editing of a query text is available. User-friendly graphical interface allows you to ..
EMS MySQL Manager Professional for Windows v.2.8.0.1 - EMS MySQL Manager is a high-performance tool for administering MySQL server. It provides an easy-to-use graphical interface for maintaining databases and database objects, managing table data, building SQL queries, managing users and many more..
EMS MySQL Manager Lite v.3.3 - EMS MySQL Manager Lite is an excellent freeware graphical tool for MySQL Server administration. It has minimal required set of instruments for those users who are new to MySQL server and need only its basic functionality.
EMS SQL Manager 2005 Lite for SQL Server v.2.3 - EMS SQL Manager Lite for SQL Server is a light and easy-to-use freeware graphical tool for MS SQL/MSDE administration. It has minimal required set of instruments for those users who are new to MS SQL server and need only it's basic functionality.
EMS SQL Management Studio for PostgreSQL v.1.0 - EMS SQL Management Studio for PostgreSQL is a complete solution for database administration and development. SQL Studio unites the must-have tools in one powerful and easy-to-use environment that will make you more productive than ever before!
EMS SQL Management Studio for SQL Server v.1.0 - EMS SQL Management Studio is a complete solution for database administration and development. SQL Studio unites the must-have tools in one powerful and easy-to-use environment that will make you more productive than ever before!
EMS SQL Manager 2007 Lite for MySQL v.4.0 - EMS SQL Manager Lite for MySQL is an excellent freeware graphical tool for MySQL Server administration. It has minimal required set of instruments for those users who are new to MySQL server and need only its basic functionality.
EMS SQL Manager 2007 for MySQL v.4.0 - EMS SQL Manager for MySQL is a powerful tool for MySQL Server administration and development. Its state-of-the-art graphical interface and a lot of features will make your work with any MySQL server versions as easy as it can be!
EMS SQL Manager 2007 Lite for PostgreSQL v.4.0 - EMS SQL Manager Lite for PostgreSQL is a light and easy-to-use freeware graphical tool for PostgreSQL administration. It has minimal required set of instruments for those users who are new to PostgreSQL server and need only its basic functionality.
EMS SQL Manager 2007 for PostgreSQL v.4.3 - EMS SQL Manager for PostgreSQL is a powerful graphical tool for PostgreSQL DB Server administration and development.
EMS SQL Query 2007 for DB2 v.3.0 - EMS SQL Query for DB2 is a utility that lets you quickly and simply build SQL queries to IBM DB2 databases. Visual building as well as direct editing of a query text is available.
EMS SQL Query 2007 for MySQL v.3.0 - EMS SQL Query for MySQL is an utility that lets you quickly and simply build SQL queries to MySQL databases. Visual building as well as direct editing of a query text is available.
EMS DB Comparer 2007 for Oracle v.3.0 - EMS DB Comparer for Oracle is a powerful utility for comparing InterBase/Firebird databases and discovering differences in their structures.
EMS Data Export 2007 for Oracle v.3.0 - EMS Data Export for Oracle is a powerful program to export your data quickly from Oracle databases to any of 19 available formats, including MS Access, MS Excel, MS Word, RTF, HTML, XML, PDF, TXT, CSV, DBF and others.
EMS Data Export 2007 for MySQL v.3.0 - EMS Data Export 2005 for MySQL is a powerful program to export your data quickly from MySQL databases to most popular formats, including MS Access, MS Excel, MS Word, RTF, HTML, XML, PDF, TXT, CSV, DBF and others.
Yoshimura Ems Software
PIM2 Software works on PCs running any version of Windows. Jan 10, 2012 - Yoshimura EMS PIM2 EFI Fuel Controller KAWASAKI KFX 450R. There are dozens of maps to download for FREE on Yoshimura's website. PIM2 Software works on PC's (Any Version of Windows) and Intel-Based MAC's. Yoshimura EMS PIM2 Unit. Check out our race proven PIM units. Easy to read display - intuitive and logical.
Yoshimura Ems Software Free Trial
Yoshimura Ems software, free downloads
Today's Top Ten Downloads for Ems Software
Yoshimura Ems
Vemail Voice Email Software for Windows Vemail is software that lets you record and send voice
KnowMetrics (online testing software) KnowMetrics ( online testing software ) is the ideal tool
Cyber Internet Cafe Software - Internet Caffe Complete solution for timing and billing management control.
CYBER INTERNET CAFE SOFTWARE MyCafeCup Internet cafe software and Cyber Cafe Software from
Software Help Creator Download Software help file tool online to create
Amara Photo Slideshow Software Amara Flash Photo Slideshow software is a Flash album
IMagic Inventory Software Managing stock has never been as easy! iMagic Inventory lets
XLabel - Label Printing Software by Wolf XLabel - High end label design and printing software ,
Quick-Recovery-for-FAT-Data-Recovery-Software Unistal offer you the best solution for Your Data Recovery,
Print Helper: Batch and Automatic Printing PrintHelper is the ideal solution for organizations that
Visit HotFiles@Winsite for more of the top downloads here at WinSite!
globalmediacampaign · 5 years ago
Text
Migrating Oracle databases with near-zero downtime using AWS DMS
Do you have critical Oracle OLTP databases in your organization that can’t afford downtime? Do you want to migrate your Oracle databases to AWS with minimal or no downtime? In today’s fast-paced world with 24/7 application and database availability, some of your applications may not be able to afford significant downtime while migrating on-premises databases to the cloud. This post discusses a solution for migrating your on-premises Oracle databases to Amazon Relational Database Service (RDS) for Oracle using AWS Database Migration Service (AWS DMS) and its change data capture (CDC) feature to minimize downtime. Overview of AWS DMS AWS DMS is a cloud service that helps you to migrate databases to AWS. AWS DMS can migrate relational databases, data warehouses, NoSQL databases, and other types of data stores into the AWS Cloud. AWS DMS supports homogeneous and heterogenous migrations between different database platforms. You can perform one-time migrations and also replicate ongoing changes to keep source and target databases in sync. To use AWS DMS, at least one database end should be in AWS, either the source or target database. When you replicate data changes only using AWS DMS, you must specify a time or system change number (SCN) from which AWS DMS begins to read changes from the database logs. It’s important to keep these logs available on the server for a period of time to make sure that AWS DMS has access to these changes. Migrating LOBs If your source database has large binary objects (LOBs) and you have to migrate them over to the target database, AWS DMS offers the following options: Full LOB mode – AWS DMS migrates all the LOBs from the source to the target database regardless of their size. Though the migration is slower, the advantage is that data isn’t truncated. For better performance, you should create a separate task on the new replication instance to migrate the tables that have LOBs larger than a few megabytes. Limited LOB mode – You specify the maximum size of LOB column data, which allows AWS DMS to pre-allocate resources and apply the LOBs in bulk. If the size of the LOB columns exceeds the size that is specified in the task, AWS DMS truncates the data and sends warnings to the AWS DMS log file. You can improve performance by using Limited LOB mode if your LOB data size is within the Limited LOB size. Inline LOB mode – You can migrate LOBs without truncating the data or slowing the performance of your task by replicating both small and large LOBs. First, specify a value for the InlineLobMaxSize parameter, which is available only when Full LOB mode is set to true. The AWS DMS task transfers the small LOBs inline, which is more efficient. Then, AWS DMS migrates the large LOBs by performing a lookup from the source table. However, Inline LOB mode only works during the full load phase. Solution overview This post uses an Amazon EC2 for Oracle DB instance as the source database assuming your on-premises database and the Amazon RDS for Oracle database as the target database. This post also uses Oracle Data Pump to export and import the data from the source Oracle database to the target Amazon RDS for Oracle database and uses AWS DMS to replicate the CDC changes from the source Oracle database to the Amazon RDS for Oracle database. This post assumes that you’ve already provisioned the Amazon RDS for Oracle database in your AWS Cloud environment as your target database. The following diagram illustrates the architecture of this solution. 
The solution includes the following steps: Provision an AWS DMS replication instance with the source and target endpoints Export the schema using Oracle Data Pump from the on-premises Oracle database Import the schema using Oracle Data Pump into the Amazon RDS for Oracle database Create an AWS DMS replication task using CDC to perform live replication Validate the database schema on the target Amazon RDS for Oracle database Prerequisites Based on the application, after you determine which Oracle database schema to migrate to the Amazon RDS for Oracle database, you must gather the few schema details before initiating the migration, such as the schema size, the total number of objects based on object types, and invalid objects. To use the AWS DMS CDC feature, enable database-level and table-level supplemental logging at the source Oracle database. After you complete the pre-requisites, you can provision the AWS DMS instances. Provisioning the AWS DMS instances Use the DMS_instance.yaml AWS CloudFormation template to provision the AWS DMS replication instance and its source and target endpoints. Complete the following steps: On the AWS Management Console, under Services, choose CloudFormation. Choose Create Stack. For Specify template, choose Upload a template file. Select Choose File. Choose the DMS_instance.yaml file. Choose Next. On the Specify stack details page, edit the predefined parameters as needed: For stack name, enter your stack name. Under AWS DMS Instance Parameters, enter the following parameters: DMSInstanceType – Choose the required instance for AWS DMS replication instance. DMSStorageSize – Enter the storage size for the AWS DMS instance. Under source Oracle database configuration, enter the following parameters: SourceOracleEndpointID – The source database server name for your Oracle database SourceOracleDatabaseName – The source database service name or SID as applicable SourceOracleUserName – The source database username. The default is system SourceOracleDBPassword – The source database username’s password SourceOracleDBPort – The source database port Under Target RDS for Oracle database configuration, enter the following parameters: TargetRDSOracleEndpointID – The target RDS database endpoint TargetRDSOracleDatabaseName – The target RDS database name TargetRSOracleUserName – The target RDS username TargetRDSOracleDBPassword – The target RDS password TargetOracleDBPort – The target RDS database port Under VPC, subnet, and security group configuration, enter the following parameters: VPCID – The VPC for the replication instance VPCSecurityGroupId – The VPC Security Group for the replication instance DMSSubnet1 – The subnet for Availability Zone 1 DMSSubnet2 – The subnet for Availability Zone 2 Choose Next. On the Configure Stack Options page, for Tags, enter any optional values. Choose Next. On the Review page, select the check box for I acknowledge that AWS CloudFormation might create IAM resources with custom names. Choose Create stack. The provisioning should complete in approximately 5 to 10 minutes. It is complete when the AWS CloudFormation Stacks console shows Create Complete. From the AWS Management Console, choose Services. Choose Database Migration Services. Under Resource management, choose Replication Instances. The following screenshot shows the Replication instances page, which you can use to check the output. Under Resource management, choose Endpoints. 
The following screenshot shows the Endpoints page, in which you can see both the source and target endpoints. After the source and target endpoints shows status as Active, you should test the connectivity. Choose Run test for each endpoint to make sure that the status shows as successful. You have now created AWS DMS replication instances along with its source and target endpoints and performed the endpoint connectivity test to make sure they can make successful connections. Migrating the source database schema to the target database You can now migrate the Oracle database schema to the Amazon RDS for Oracle database by using Oracle Data Pump. Oracle Data Pump provides a server-side infrastructure for fast data and metadata movement between Oracle databases. It is ideal for large databases where high-performance data movement offers significant time savings to database administrators. Data Pump automatically manages multiple parallel streams of unload and load for maximum throughput. Exporting the data When the source database is online and actively used by the application, start the data export with Oracle Data Pump from the source on-premises Oracle database. You must also generate the SCN from your source database to use the SCN in the data pump export for data consistency and in AWS DMS as a starting point for change data capture. To export the database schema, complete the following steps: Enter the following SQL statement to generate the current SCN from your source database: SQL> SELECT current_scn FROM v$database; CURRENT_SCN ----------- 7097405 Record the generated SCN to use when you export the data and for AWS DMS. Create a parameter file to export the schema. See the content of the parameter file: # Use the generated SCN in step#1 for the flashback_scn parameter and create the required database directory if default DATA_PUMP_DIR database directory is not being used. $ cat export_sample_user.par userid=dms_sample/dms_sample directory=DATA_PUMP_DIR logfile=export_dms_sample_user.log dumpfile=export_dms_sample_data_%U.dmp schemas=DMS_SAMPLE flashback_scn=7097405 Execute the export using the expdp utility. See the following code: $ expdp parfile=export_sample_user.par Export: Release 12.2.0.1.0 - Production on Wed Oct 2 01:46:05 2019 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production FLASHBACK automatically enabled to preserve database integrity. Starting "DMS_SAMPLE"."SYS_EXPORT_SCHEMA_01": dms_sample/******** parfile=export_sample_user.par . . . . Master table "DMS_SAMPLE"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded ****************************************************************************** Dump file set for DMS_SAMPLE.SYS_EXPORT_SCHEMA_01 is: /u03/app/backup/expdp_dump/export_dms_sample_data_01.dmp Job "DMS_SAMPLE"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Oct 2 01:47:27 2019 elapsed 0 00:01:20 Transferring the dump file to the target instance There are multiple ways to transfer your Oracle Data Pump export files to your Amazon RDS for Oracle instance. You can transfer your files using either the Oracle DBMS_FILE_TRANSFER utility or the Amazon S3 integration feature. Transferring the dump file with DBMS_FILE_TRANSFER You can transfer your data pump files directly to the RDS instance by using the DBMS_FILE_TRANSFER utility. You must create a database link between the on-premises and the Amazon RDS for Oracle database instance. 
The following code creates a database link ORARDSDB that connects to the RDS master user at the target DB instance: $ sqlplus / as sysdba SQL> create database link orardsdb connect to admin identified by "xxxxxx" using '(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = database-1.xxxxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))'; Database link created. Test the database link to make sure you can connect using sqlplus. See the following code: SQL> select name from v$database@orardsdb; NAME --------- ORCL To copy the dump file over to Amazon RDS for Oracle database, you can either use the default DATA_PUMP_DIR directory or you can create your own directory using the following code: exec rdsadmin.rdsadmin_util.create_directory(p_directory_name => ‘TARGET_PUMP_DIR’); The following script copies a dump file named export_dms_sample_data_01.dmp from the source instance to a target Amazon RDS for Oracle database using the database link named orardsdb. [oracle@ip-172-31-45-39 ~]$ sqlplus / as sysdba SQL> BEGIN DBMS_FILE_TRANSFER.PUT_FILE( source_directory_object => 'DATA_PUMP_DIR', source_file_name => 'export_dms_sample_data_01.dmp', destination_directory_object => 'TARGET_PUMP_DIR’', destination_file_name => 'export_dms_sample_data_01.dmp', destination_database => 'orardsdb' ); END; / PL/SQL procedure successfully completed. After the above PL/SQL procedure completes, you can list the data dump file in the Amazon RDS for Oracle database directly with the following code: SQL> select * from table (rdsadmin.rds_file_util.listdir(p_directory => ‘TARGET_PUMP_DIR’)); Transferring the dump file with S3 integration With S3 integration, you can transfer your Oracle Data dump files directly to your Amazon RDS for Oracle instance. After you export your data from your source DB instance, you can upload your data pump files to your S3 bucket, download the files from your S3 bucket to the Amazon RDS for Oracle instance, and perform the import. You can also use this integration feature to transfer your data dump files from your Amazon RDS for Oracle DB instance to your on-premises database server. The Amazon RDS for Oracle instance must have access to an S3 bucket to work with Amazon RDS for Oracle S3 integration and S3. Create an IAM policy and an IAM role. Grant your IAM policy with GetObject, ListBucket, and PutObject. Create the IAM role and attach the policy to the role. To use Amazon RDS for Oracle integration with S3, your Amazon RDS for Oracle instance must be associated with an option group that includes the S3_INTEGRATION option. To create the Amazon RDS option group, complete the following steps: On the Amazon RDS console, under Options group, choose Create Under Option group details, for name, enter the name of your option group. This post enters rds-oracle12r2-option-group. For Description, enter a description of your group. For Engine, choose the engine for the target Amazon RDS for Oracle database to migrate. This post chooses oracle-ee. For Major engine version, choose the engine version. This post chooses 12.2. Choose Create. After the option group is created, you must add the S3_Integration option to the option group. Complete the following steps: On the RDS console, choose Option Group. Choose the group that you created. Choose Add option. For Option, choose S3_INTEGRATION. For Version, choose 1.0. For Apply Immediately, select Yes. Choose Add Option. 
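Before running the import, it can also be worth confirming that the transferred dump file arrived intact by comparing its size with the source file. The sketch below is one way to do that from the RDS side, assuming the TARGET_PUMP_DIR directory and file name used above; the column names shown are those returned by rdsadmin.rds_file_util.listdir.

-- Check that the transferred dump file is present and fully sized
SELECT filename, filesize, mtime
FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'TARGET_PUMP_DIR'))
WHERE filename = 'export_dms_sample_data_01.dmp';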
After you add S3_Integration to the option group, you must modify your target Amazon RDS for Oracle database to use the new option group. Complete the following steps to add the option group to your existing Amazon RDS for Oracle database: On the RDS console, under Databases, choose the DB instance that you want to modify. Choose Modify. The Modify DB Instance page appears. Under Database options, for Option Group, select the newly created option group that you created. Choose Continue. Under Scheduling of modifications, choose Apply immediately. Choose Modify DB Instance. When the Amazon RDS for Oracle database reflects the new option group, you must associate your IAM role and S3_Integration features with your DB instance. Complete the following steps: On the RDS console, choose your DB instance. Under the Connectivity and Security tab, choose Manage IAM roles. For Add IAM role to this instance, choose RDS_S3_Integration_Role (the role that you created). For Features, choose S3_INTEGRATION. Choose Add role. After the IAM role and S3 integration feature are associated with your Amazon RDS for Oracle database, you are done integrating S3 with the Amazon RDS for Oracle database. You can now upload the data dump files from your on-premises Oracle database instance to S3 with the following code: $ aws s3 cp export_dms_sample_data_01.dmp s3://mydbs3bucket/dmssample/ upload: ./export_dms_sample_data_01.dmp to s3:// mydbs3bucket/dmssample//export_dms_sample_data_01.dmp After you upload the data dump files to the S3 bucket, connect to your target database instance and upload the data pump files from S3 to DATA_PUMP_DIR of your target instance. See the following code: SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3( p_bucket_name => 'mydbs3bucket', p_s3_prefix => 'dmssample/export_dms_sample_data_01', p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL; This gives you the task ID 1572302128132-3676. Verify the status of the file you uploaded to the Amazon RDS for Oracle instance with the following SQL query: SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1572302364019-3676.log')); After the above SQL query output shows file downloaded successfully, you can list the data pump file in Amazon RDS for Oracle database  with the following code: SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR(‘DATA_PUMP_DIR’)) order by mtime; Starting the import After the data dump file is available, create the roles, schemas and tablespaces onto the target Amazon RDS for Oracle database before you initiate the import. Connect to the target Amazon RDS for Oracle database with the RDS master user account to perform the import. Add the Amazon RDS for Oracle database tns-entry to the tnsnames.ora and using the name of the connection string to perform the import. You can add a remap of the tablespace and schema accordingly if you want to import into another tablespace or with another schema name. Start the import into Amazon RDS for Oracle from on-premises using the connection string name as shown in following code: $ impdp admin@orardsdb directory=DATA_PUMP_DIR logfile=import.log dumpfile=export_dms_sample_data_01.dmp Import: Release 12.2.0.1.0 - Production on Wed Oct 2 01:52:01 2019 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. 
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded Starting "ADMIN"."SYS_IMPORT_FULL_01": admin/********@orardsdb directory=DATA_PUMP_DIR logfile=import.log dumpfile=export_dms_sample_data_01.dmp Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE Processing object type SCHEMA_EXPORT/TABLE/TABLE Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA . . . Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE Processing object type SCHEMA_EXPORT/VIEW/VIEW Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX Post-import checks and validation To validate that the import has completed successfully, review the import log file for any errors. Also, compare details such as the source and target database objects, row count, and invalid objects and recompile if there are any invalid objects. After the import has completed successfully, to avoid data inconsistency, disable triggers and foreign keys on the target Amazon RDS for Oracle database for the relevant schema, to prepare the target database for the AWS DMS replication. Creating the AWS DMS migration task Create the AWS DMS migration task with the following steps: On the AWS DMS console, under Conversion & migration, choose Database migration task. Under Task configuration, for Task identifier, enter your task identifier. For Replication Instance, choose the DMS replication instance that you created. For Source database endpoint, choose your source endpoint. For Target database endpoint, choose your target Amazon RDS for Oracle database. For Migration type, choose Replicate data changes only. Under Task settings, select Specify log sequence number. For System change number, enter the Oracle database SCN that you generated from the source Oracle database. Select Enable validation. Select Enable CloudWatch Logs. This allows you to validate the data and Amazon CloudWatch Logs to review the AWS DMS replication instance logs. Under Selection rules, complete the following: For Schema, choose Enter a schema. For Schema name, enter DMS_SAMPLE. For Table name, enter %. For Action, choose Include. Under Transformation rules, complete the following: For Target, choose Table. For Scheme name, choose Enter a schema. For Schema name, enter DMS_SAMPLE. For Action, choose Rename to. Choose Create task. After you create the task, it migrates the CDC to the Amazon RDS for Oracle database instance from the SCN that you provided under CDC start mode. You can also verify by reviewing the CloudWatch Logs. The following screenshot shows the log details of your migration. Data validation AWS DMS does data validation to confirm that your data successfully migrated the source database to the target. You can check the Table statistics page to determine the DML changes that occurred after the AWS DMS task started. 
During data validation, AWS DMS compares each row in the source with its corresponding row at the target, and verifies that those rows contain the same data. To accomplish this, AWS DMS issues the appropriate queries to retrieve the data. The following screenshot shows the Table statistics page and its relevant entries. You can also count and compare the number of records in the source and target databases to confirm that the CDC data is replicated from the source to the target database. During the planned maintenance window, you can turn off all the applications pointing to the source database and enable the triggers and foreign key constraints using the following code: -- Run the below statement to generate list of triggers to be enabled select 'alter trigger '||owner||'.'||trigger_name|| ' enable;' from dba_triggers where owner='DMS_SAMPLE'; -- Run the below statement to generate list of constraints to be enabled select 'alter table '||owner||'.'||table_name||' enable constraint '||constraint_name ||';' from dba_constraints where owner='DMS_SAMPLE' and constraint_type='R'; As DMS does not replicate incremental sequence numbers during CDC from source database, you will need to generate the latest sequence value from the source for all the sequences and apply it on the target Amazon RDS for Oracle database to avoid sequence value inconsistencies. Now, point the application to the target Amazon RDS for Oracle database by modifying the connection details. After you bring up the application, you should see that all application connections are now established on the target Amazon RDS for Oracle database. After you confirm that connections no longer exist on the source database, you can stop the source database. Summary This post demonstrated how to migrate an on-premises Oracle database to an Amazon RDS for Oracle database by using the Oracle Data Pump and AWS DMS with minimal to no downtime. You can migrate and replicate your critical databases seamlessly to Amazon RDS by using AWS DMS and its CDC feature. We encourage you to try this solution and take advantage of all the benefits of using AWS DMS with Oracle databases. For more information, see Getting started with AWS Database Migration Service and Best Practices for AWS Database Migration Service. For more information on Oracle Database Migration, refer to the guide Migrating Oracle Databases to the AWS Cloud. Please feel free to reach out with questions or requests in the comments. Happy migrating!   About the Authors   Sagar Patel is a Database Specialty Architect with the Professional Services team at Amazon Web Services. He works as a database migration specialist to provide technical guidance and help Amazon customers to migrate their on-premises databases to AWS.        Sharath Lingareddy is Database Architect with the Professional Services team at Amazon Web Services. He has provided solutions using Oracle, PostgreSQL, Amazon RDS. His focus area is homogeneous and heterogeneous migrations of on-premise databases to Amazon RDS and Aurora PostgreSQL.       Jeevith Anumalla is an Oracle Database Cloud Architect with the Professional Services team at Amazon Web Services. He works as database migration specialist to help internal and external Amazon customers to move their on-premises database environment to AWS data stores.       https://probdm.com/site/MTM5NTY
terabitweb · 6 years ago
Text
Original Post from Amazon Security Author: Ben Romano
Note: This blog provides an alternate solution to Visualizing Amazon GuardDuty Findings, in which the authors describe how to build an Amazon Elasticsearch Service-powered Kibana dashboard to ingest and visualize Amazon GuardDuty findings.
Amazon GuardDuty is a managed threat detection service powered by machine learning that can monitor your AWS environment with just a few clicks. GuardDuty can identify threats such as unusual API calls or potentially unauthorized users attempting to access your servers. Many customers also like to visualize their findings in order to generate additional meaningful insights. For example, you might track resources affected by security threats to see how they evolve over time.
In this post, we provide a solution to ingest, process, and visualize your GuardDuty finding logs in a completely serverless fashion. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers. Our solution covers how to build a pipeline that ingests findings into Amazon Simple Storage Service (Amazon S3), transforms their nested JSON structure into tabular form using Amazon Athena and AWS Glue, and creates visualizations using Amazon QuickSight. We aim to provide both an easy-to-implement and cost-effective solution for consuming and analyzing your GuardDuty findings, and to more generally showcase a repeatable example for processing and visualizing many types of complex JSON logs.
Many customers already maintain centralized logging solutions using Amazon Elasticsearch Service (Amazon ES). If you want to incorporate GuardDuty findings with an existing solution, we recommend referencing this blog post to get started. If you don’t have an existing solution or previous experience with Amazon ES, if you prefer to use serverless technologies, or if you’re familiar with more traditional business intelligence tools, read on!
Before you get started
To follow along with this post, you’ll need to enable GuardDuty in order to start generating findings. See Setting Up Amazon GuardDuty for details if you haven’t already done so. Once enabled, GuardDuty will automatically generate findings as events occur. If you have public-facing compute resources in the same region in which you’ve enabled GuardDuty, you may soon find that they are being scanned quite often. All the more reason to continue reading!
You’ll also need Amazon QuickSight enabled in your account for the visualization sections of this post. You can find instructions in Setting Up Amazon QuickSight.
Architecture from end to end
  Figure 1: Complete architecture from findings to visualization
Figure 1 highlights the solution architecture, from finding generation all the way through final visualization. The steps are as follows:
Deliver GuardDuty findings to Amazon CloudWatch Events
Push GuardDuty Events to S3 using Amazon Kinesis Data Firehose
Use AWS Lambda to reorganize S3 folder structure
Catalog your GuardDuty findings using AWS Glue
Configure Views with Amazon Athena
Build a GuardDuty findings dashboard in Amazon QuickSight
Below, we’ve included an AWS CloudFormation template to launch a complete ingest pipeline (Steps 1-4) so that we can focus this post on the steps dedicated to building the actual visualizations (Steps 5-6). We cover steps 1-4 briefly in the next section to provide context, and we provide links to the pertinent pages in the documentation for those of you interested in building your own pipeline.  
Ingest (Steps 1-4): Get Amazon GuardDuty findings into Amazon S3 and AWS Glue Data Catalog
  Figure 2: In this section, we’ll cover the services highlighted in blue
Step 1: Deliver GuardDuty findings to Amazon CloudWatch Events
GuardDuty has integration with and can deliver findings to Amazon CloudWatch Events. To perform this manually, follow the instructions in Creating a CloudWatch Events Rule and Target for GuardDuty.
Step 2: Push GuardDuty events to Amazon S3 using Kinesis Data Firehose
Amazon CloudWatch Events can write to an Kinesis Data Firehose delivery stream to store your GuardDuty events in S3, where you can use AWS Lambda, AWS Glue, and Amazon Athena to build the queries you’ll need to visualize the data. You can create your own delivery stream by following the instructions in Creating a Kinesis Data Firehose Delivery Stream and then adding it as a target for CloudWatch Events.
Step 3: Use AWS Lambda to reorganize Amazon S3 folder structure
Kinesis Data Firehose will automatically create a datetime-based file hierarchy to organize the findings as they come in. Due to the variability of the GuardDuty finding types, we recommend reorganizing the file hierarchy with a folder for each finding type, with separate datetime subfolders for each. This will make it easier to target findings that you want to focus on in your visualization. The provided AWS CloudFormation template utilizes an AWS Lambda function to rewrite the files in a new hierarchy as new files are written to S3. You can use the code provided in it along with Using AWS Lambda with S3 to trigger your own function that reorganizes the data. Once the Lambda function has run, the S3 bucket structure should look similar to the structure we show in figure 3.  
Figure 3: Sample S3 bucket structure
Step 4: Catalog the GuardDuty findings using AWS Glue
With the reorganized findings stored in S3, use an AWS Glue crawler to scan and catalog each finding type. The CloudFormation template we provided schedules the crawler to run once a day. You can also run it on demand as needed. To build your own crawler, refer to Cataloging Tables with a Crawler. Assuming GuardDuty has generated findings in your account, you can navigate to the GuardDuty findings database in the AWS Glue Data Catalog. It should look something like figure 4:  
Figure 4: List of finding type tables in the AWS Glue Catalog
Note: Because AWS Glue crawlers will attempt to combine similar data into one table, you might need to generate sample findings to ensure enough variability for each finding type to have its own table. If you only intend to build your dashboard from a small subset of finding types, you can opt to just edit the crawler to have multiple data sources and specify the folder path for each desired finding type.
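Once the crawler has finished, a quick way to confirm what it cataloged is to list the finding-type tables from the Athena console. The database name below (gd_findings) is hypothetical; substitute whatever database your crawler created, and note that table names may differ in your account.

-- List the tables the crawler created for each finding type
SHOW TABLES IN gd_findings;

-- Inspect the schema Athena sees for one finding type
SHOW CREATE TABLE gd_findings.recon_ec2_portprobeunprotectedport;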
Explore the table structure
Before moving on to the next step, take some time to explore the schema structure of the tables. Selecting one of the tables will bring you to a page that looks like what’s shown in figure 5.  
Figure 5: Schema information for a single finding table
You should see that most of the columns contain basic information about each finding, but there’s a column named detail that is of type struct. Select it to expand, as shown in figure 6.  
Figure 6: The “detail” column expanded
Ah, this is where the interesting information is tucked away! The tables for each finding may differ slightly, but in all cases the detail column will hold the bulk of the information you’ll want to visualize. See GuardDuty Active Finding Types for information on what you should expect to find in the logs for each finding type. In the next step, we’ll focus on unpacking detail to prepare it for visualization!
Process (Step 5): Unpack nested JSON and configure views with Amazon Athena
  Figure 7: In this section, we’ll cover the services highlighted in blue
Note: This step picks up where the CloudFormation template finishes
Explore the table structure (again) in the Amazon Athena console
Begin by navigating to Athena from the AWS Management Console. Once there, you should see a drop-down menu with a list of databases. These are the same databases that are available in the AWS Glue Data Catalog. Choose the database with your GuardDuty findings and expand a table.  
Figure 8: Expanded table in the Athena console
This should look very familiar to the table information you explored in step 4, including the detail struct!
You’ll need a method to unpack the struct in order to effectively visualize the data. There are many methods and tools to approach this problem. One that we recommend (and will show) is to use SQL queries within Athena to construct tabular views. This approach will allow you to push the bulk of the processing work to Athena. It will also allow you to simplify building visualizations when using Amazon QuickSight by providing a more conventional tabular format.
Extract details for use in visualization using SQL
The following examples contain SQL statements that will provide everything necessary to extract the necessary fields from the detail struct of the Recon:EC2/PortProbeUnprotectedPort finding to build the Amazon QuickSight dashboard we showcase in the next section. The examples also cover most of the operations you’ll need to work with the elements found in GuardDuty findings (such as deeply nested data with lists), and they serve as a good starting point for constructing your own custom queries. In general, you’ll want to traverse the nested layers (i.e. root.detail.service.count) and create new records for each item in an embedded list that you want to target using the UNNEST function. See this blog for even more examples of constructing queries on complex JSON data using Amazon Athena.
Simply copy the SQL statements that you want into the Athena query field to build the port_probe_geo and affected_instances views.
Note: If your account has yet to generate Recon:EC2/PortProbeUnprotectedPort findings, you can generate sample findings to follow along.
CREATE OR REPLACE VIEW "port_probe_geo" AS WITH getportdetails AS ( SELECT id, portdetails FROM by_finding_type CROSS JOIN UNNEST(detail.service.action.portProbeAction.portProbeDetails) WITH ORDINALITY AS p (portdetails, portdetailsindex) ) SELECT root.id AS id, root.region AS region, root.time AS time, root.detail.type AS type, root.detail.service.count AS count, portdetails.localportdetails.port AS localport, portdetails.localportdetails.portname AS localportname, portdetails.remoteipdetails.geolocation.lon AS longitude, portdetails.remoteipdetails.geolocation.lat AS latitude, portdetails.remoteipdetails.country.countryname AS country, portdetails.remoteipdetails.city.cityname AS city FROM by_finding_type as root, getPortDetails WHERE root.id = getportdetails.id
CREATE OR REPLACE VIEW "affected_instances" AS SELECT max(root.detail.service.count) AS count, date_parse(root.time,'%Y-%m-%dT%H:%i:%sZ') as time, root.detail.resource.instancedetails.instanceid FROM recon_ec2_portprobeunprotectedport AS root GROUP BY root.detail.resource.instancedetails.instanceid, time
Visualize (Step 6): Build a GuardDuty findings dashboard in Amazon QuickSight
  Figure 9: In this section we will cover the services highlighted in blue
Now that you’ve created tabular views using Athena, you can jump into Amazon QuickSight from the AWS Management Console and begin visualizing! If you haven’t already done so, enable Amazon QuickSight in your account by following the instructions for Setting Up Amazon QuickSight.
For this example, we'll leverage the port_probe_geo view to build a geographic visualization and see the locations from which nefarious actors are launching port probes.
Creating an analysis
In the upper left-hand corner of the Amazon QuickSight console select New analysis and then New data set.  
Figure 10: Create a new analysis
To utilize the views you built in the previous step, select Athena as the data source. Give your data source a name (in our example, we use “port probe geo”), and select the database that contains the views you created in the previous section. Then select Visualize.  
Figure 11: Available data sources in Amazon QuickSight. Be sure to choose Athena!
Figure 12: Select the “port probe geo” view you created in step 5
Viz time!
From the Visual types menu in the bottom left corner, select the globe icon to create a map. Then select the latitude and longitude geospatial coordinates. Choose count (with a max aggregation) for size. Finally, select localportname to break the data down by color.  
Figure 13: A visual containing a map of port probe scans in Amazon QuickSight
Voila! A detailed map of your environment’s attackers!
Build out a dashboard
Once you like how everything looks, you can move on to adding more visuals to create a full monitoring dashboard.
To add another visual to the analysis, select Add and then Add visual.  
Figure 14: Add another visual using the ‘Add’ option from the Amazon QuickSight menu bar
If the new visual will use the same dataset, then you can immediately start selecting fields to build it. If you want to create a visual from a different data set (our example dashboard below adds the affected_instances view), follow the Creating Data Sets guide to add a new data set. Then return to the current analysis and associate the data set with the analysis by selecting the pencil icon shown below and selecting Add data set.  
Figure 15: Adding a new data set to your Amazon QuickSight analysis
Repeat this process until you’ve built out everything you need in your monitoring dashboard. Once it’s completed, you can publish the dashboard by selecting Share and then Publish dashboard.  
Figure 16: Publish your dashboard using the “Share” option of the Amazon QuickSight menu
Here’s an example of a dashboard we created using the port_probe_geo and affected_instances views:  
Figure 17: An example dashboard created using the “port_probe_geo” and “affected_instances” views
What does something like this cost?
To get an idea of the scale of the cost, we’ve provided a small pricing example (accurate as of the writing of this blog) that assumes 10,000 GuardDuty findings per month with an average payload size of 5KB.
Service | Pricing Structure | Amount Consumed | Total Cost
Amazon CloudWatch Events | $1 per million events | 10,000 events | $0.01
Amazon Kinesis Data Firehose | $0.029 per GB ingested | 0.05 GB ingested | $0.00145
Amazon S3 | $0.029 per GB stored per month | 0.1 GB stored | $0.00230
AWS Lambda | First million invocations free | ~200 invocations | $0
Amazon Athena | $5 per TB scanned | 0.003 TB scanned (assumes 2 full data scans per day to refresh views) | $0.015
AWS Glue | $0.44 per DPU hour (2 DPU minimum and 10 minute minimum) = $0.15 per crawler run | 30 crawler runs | $4.50
Total Processing Cost | | | $4.53
Oh, the joys of a consumption-based model: Less than five dollars per month for all of that processing!
From here, all that remains are your visualization costs using Amazon QuickSight. This pricing is highly dependent upon your number of users and their respective usage patterns. See the Amazon QuickSight pricing page for more specific details.
Summary
In this post, we demonstrated how you can ingest your GuardDuty findings into S3, process them with AWS Glue and Amazon Athena, and visualize with Amazon QuickSight. All serverless! Each portion of what we showed can be used in tandem or on its own for this or many other data sets. Go launch the template and get started monitoring your AWS environment!
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
Ben Romano
Ben is a Solutions Architect in AWS supporting customers in their journey to the cloud with a focus on big data solutions. Ben loves to delight customers by diving deep on AWS technologies and helping them achieve their business and technology objectives.
Jimmy Boyle
Jimmy is a Solutions Architect in AWS with a background in software development. He enjoys working with all things serverless because he doesn’t have to maintain infrastructure. Jimmy enjoys delighting customers to drive their business forward and design solutions that will scale as their business grows.
deplloyer · 7 years ago
Text
Online migration from AWS RDS MySQL to Azure Database for MySQL
We recently announced public preview support for online migrations of MySQL to Azure Database for MySQL by using the Azure Database Migration Service (DMS). Customers can migrate their MySQL workloads hosted on premises, on virtual machines, or on AWS RDS to Azure Database for MySQL while the source databases remain online. This minimizes application downtime and reduces the SLA impact for customers.
Conceptually, online migration with minimum downtime in DMS uses the following process:
Migrate the initial load using bulk copy.
While the initial load is being migrated, incoming changes are cached and applied after the initial load completes.
Changes in the source database continue to replicate to the target database until the user decides to cut over.
During a planned maintenance window, stop new transactions coming into the source database. Application downtime starts with this step.
Wait for DMS to replicate the last batch of data.
Complete application cutover by updating the connection string to point to your instance of Azure Database for MySQL.
Bring the application online.
Below are the prerequisites for setting up DMS and step-by-step instructions for connecting to a MySQL database source on AWS RDS.
Prerequisites
Create an instance of Azure Database for MySQL. Refer to the article Use MySQL Workbench to connect and query data for details on how to connect and create a database using the Azure portal.
Create a VNET for the Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either ExpressRoute or VPN.
Ensure that your Azure Virtual Network (VNET) Network Security Group rules do not block the following communication ports: 443, 53, 9354, 445 and 12000. For more detail on Azure VNET NSG traffic filtering, see the article Filter network traffic with network security groups.
Configure your Windows Firewall for database engine access.
Open your Windows firewall to allow the Azure Database Migration Service to access the source MySQL Server, which by default is TCP port 3306.
When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration.
Create a server-level firewall rule for the Azure Database for MySQL to allow the Azure Database Migration Service access to the target databases. Provide the subnet range of the VNET used for the Azure Database Migration Service.
The source MySQL must be on a supported MySQL community edition. To determine the version of the MySQL instance, run SELECT @@version; in the MySQL utility or MySQL Workbench.
Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article Converting Tables from MyISAM to InnoDB.
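For reference, the conversion itself is a single statement per table; the following is a minimal sketch in which the database and table names (mydb.orders) are placeholders, so substitute your own objects:

-- Find user tables still using the MyISAM engine
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');

-- Convert a table to InnoDB (placeholder name; repeat for each table found above)
ALTER TABLE mydb.orders ENGINE = InnoDB;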
Setting up AWS RDS MySQL for replication
Use the following instructions provided by AWS to create a new parameter group (see the binary logging format section).
MySQL Database Log Files - Amazon Relational Database Service
When creating the new parameter group, set the following values:
binlog_format = row
binlog_checksum= NONE
Save the new parameter group
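Once the new parameter group is associated with the RDS instance (and the instance rebooted if required), you can verify the settings from any MySQL client; this is just a sanity check, not an official migration step:

-- Confirm the replication-related settings took effect on the RDS source
SHOW VARIABLES LIKE 'binlog_format';   -- expected value: ROW
SHOW VARIABLES LIKE 'binlog_checksum'; -- expected value: NONE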
Pre-migration steps and setting up DMS
Please refer to this tutorial to continue with schema migration, setting up DMS to do data movement and how to monitor data movement.
For known issues and workarounds, please refer to this document.
Congratulations, you have just performed a MySQL migration from AWS RDS to Azure Database for MySQL successfully! For more information, refer to the resources below. If you have any questions, please email the DMS Feedback alias at [email protected].
Resources
Tutorial: Migrating from SQL Server to Azure SQL Database online migration
Tutorial: Migrating from MySQL to Azure Database for MySQL online migration
How to monitor an online migration
Sign in to the Azure portal and set up an instance of DMS for free. We are constantly adding new database source/target pairs to DMS. Stay up-to-date on #AzureDMS news and follow us on Twitter @Data_Migrations. Join the Azure Database Migration Service (DMS) Preview Yammer group, and give us feedback via User Voice or email by contacting us at [email protected].
file-formats-programming · 8 years ago
Text
Set Text Properties for Tables in Presentation, Spelling & Grammar Check on Text Portion using Java
What's New in this Release?
The long-awaited version of Aspose.Slides for Java, 17.1.0, has been released. This is a maintenance release in which several issues in the product have been resolved. Text management support has been improved in this release, and a LanguageId property has been added to set the proofing language for checking spelling and grammar at the text portion level. Please visit the documentation article, Change Language for Text in Portion, for further reference. Support has also been included for setting text properties for tables inside a presentation; users now have options for setting the text format at the table level, row level, and column level. This release also improves support for working with charts, and users can now set custom chart data label values from the chart data workbook as well. Developers can also set the slide size using different ways of content scaling. Please visit the documentation article, Setting the Slide Size with respect to Content Scaling, for further reference. Presentation rendering support has also been improved in this release, resolving certain issues related to missing or improper text rendering, missing images, and missing charts and chart entities in generated PDF, HTML, and slide thumbnail outputs. This release also resolves issues related to presentation saving which earlier resulted in missing or wrong fonts, missing headers, improper text, missing hyperlinks, and wrong line styles for shapes in saved presentations. In addition, presentation access and saving issues have been rectified for many presentation decks that previously resulted in exceptions, including NullReferenceException, ArgumentException, NotImplementedException, and PptxReadException. Some important enhancements and bug fixes included in this release are given below
Support for Value from Cells feature for chart data labels
Support for bulk setting text properties for whole table, row or column
Set and control text spellcheck language using Aspose.Slides
Add support for changing language of presentation and shape's text
Using locale for setting the language
Language property for textboxes
Changing slides orientation has no effect on contents
Changing font related properties in master slide not getting applied
Setting multi-level chart categories not working
Unexpected subscript effect on saving presentation
System.ArgumentOutOfRangeException when adding shape to slide and saving
Exception on converting ppt to pptx or pptm
Pptx not properly converted to html
Pptx to pdf conversion giving OutOfMemoryError
Text are improperly rendered in generated PDF
EMF images are not properly rendered in generated pdf
Embedded fonts are not getting copied when cloning slide
Exception on generating thumbnails
Problem with content in result file after saving Ppt to Pptx
Cylinder drawing is changed after loading and saving a ppt
Meta files are improperly rendered in generated thumbnails
Fix implementation of ChartSeriesGroup.CompareTo() method.
Character misplaced after converting to svg
Cell border not generated as double line
Icon missing after converting slide to svg
Text in pptx document not justified properly
Bullets changes while converting odp to pdf
Creating charts from sql server table
Slide orientation went wrong
Thumbnails output cropped
NotImplementedException on saving presentation
Shapes with FillType.Group missing in the generated thumbnail
Text is improperly rendered in generated thumbnail
Bullet space changed after saving ppt
Pptx changed after converting to pdf
Exception on saving presentation
Text is not being rendered when exporting slides as thumbnails
High memory consumption while converting pptx to pdf
Incorrect character positioning in HTML representation of the presentation document in Safari for iOS
Equations are improperly rendered in generated PDF and thumbnails
Chart title appears on pptx to html
Incorrect chart on generated pdf
Date changed to asterisk when saving presentation
DataPoints of scattered chart are not showing in the generated image file
Y Axis Labels are not correct in the generated image file
Images are not rendered in HTML to PPTX Import
Exception on Opening the PPTX file. Error unexpected font parsing exception
Ppt to Pptx conversion disturbs equations
Improper gradient fill export for geometry shapes
Improper DrBrush is used when exporting gradient filled text to PDF
Gradient brush is incorrectly formed when exporting gradient-filled text
PPTX to PDF: Text is missing in generated PDF file
Footer not Visible when setting using Metacharacters
Chart improperly rendered in generated PDF
Protected view error message on generating PPT form Aspose.Slides
Improper thumbnail generated for PPT
Default font related properties are set when copying slide notes
Index out of range exception on accessing presentation
PowerPoint 2010 Error Message: PowerPoint has detected problem in file in generated PPT
ProtectedView message appears if multiple hyperlinks are added in generated presentation
Picture is missing in notes page on presentation save
Equations text overlap in the generated PDF
Mathematical equation are improperly rendered in exported PDF
Other most recent bug fixes are also included in this release
Newly added documentation pages and articles
Some new tips and articles have now been added to the Aspose.Slides for Java documentation that briefly guide users on how to use Aspose.Slides to perform different tasks, such as the following.
Creating a TextBox on the Slide
Setting the Size Scale of a slide
Overview: Aspose.Slides for Java
Aspose.Slides is a Java component to create, read, write and modify a PowerPoint document without using Microsoft PowerPoint. It supports Java applications and provides all advanced features for managing presentations, slides, shapes and tables, and supports the PPT, POT and PPS PowerPoint formats. Now you can add, access, copy, clone, edit and delete slides in your presentations. It also supports audio & video frames, adding pictures, text frames and saving presentations as streams or SVG format.
More about Aspose.Slides for Java
Homepage of Aspose.Slides for Java
Downlaod Aspose.Slides for Java
marcosplavsczyk · 6 years ago
Link
ApexSQL Log is a SQL log analyzer tool which can be used to read different transaction log files in order to perform SQL Server database auditing, transactional replication and data/schema recovery tasks.
Auditing – with ApexSQL Log, we can look into not only the online transaction log file, but also fully investigate transaction log backup files (.trn, .bak) as well as detached transaction log files (LDF). Our SQL log analyzer allows us to not only store and export audited results in various formats, but also provides investigation mechanisms which allow users to directly view transaction log files contents and analyze it within the ApexSQL Log grid itself maximizing potential output and results
Recovery – recovery with ApexSQL Log may seem effortless, since our SQL Log analyzer can quickly and with minimal effort roll back all inadvertent changes or lost data/structure which cannot be recovered using conventional mechanisms or SQL Server's own tools. ApexSQL Log creates “Undo” or rollback scripts which include roll-back SQL operations and will return the data or structure to its previous state
Replication – is another feature of ApexSQL Log which proved invaluable to SQL users faced with replication tasks. Our SQL log analyzer uses data audited from the transaction log files in order to generate “Redo” or roll-forward scripts which will completely mimic DML and DDL operations executed on the audited database. These scripts can then be executed over one or multiple databases in different SQL requirements in order to replay exact data and schema changes
In this article, we are going to showcase how to configure and achieve database auditing, DDL/DML recovery and transactional replication using our SQL log analyzer – ApexSQL Log.
Auditing
Any SQL Log analyzer must have powerful, top-notch auditing features which allow auditing, export and in-depth investigation transaction log files. ApexSQL Log is a SQL Server transaction log reader which audits DDL and DML operation/changes with plethora of output choices and investigation features.
To start the auditing process, the first step in ApexSQL Log is to choose the SQL Server instance, select authentication method and provide valid credentials, and select a database to audit
In the second step of the wizard, application will automatically locate and add online transaction log file, while additional transaction log files, including backups and detached LDF files can be added manually using “Add file” and “Add pattern” buttons
In the “Select output” step of the wizard, ApexSQL Log offers several auditing choices:
Export results
Create before-after report
Open results in grid
The “Export results” option will result in an audited report being generated directly inside specified repository tables, or an auditing report will be created in one of the supported file formats (SQL, SQL bulk, XML, HTML, CSV), depending on the user's preference.
The “Create before-after report” option functions in a similar manner as the previously mentioned “Export results” option – a report is created in the specified format, based on the user's choice, but in this case the audited information and details will be centered on the before and after values of DML changes only. The before-after reports use different repository tables than those generated via the “Export results” option.
The most diverse auditing option of all in our SQL log analyzer is “Open results in grid”. When this option is used, ApexSQL Log will populate a highly comprehensive grid with the audited information which allows users direct insight into the contents of their log files. Grid-sorted results can then be further filtered and investigated in great detail, both for operation details and the history of changes for specific table rows, and more. Both previously mentioned options can also be used to export results from the grid once the forensics and investigation have been completed.
Back to the auditing wizard, once the choice on auditing output has been made, ApexSQL Log will offer various filters in the “Filter setup” step. These filters can be used to fine-tune the auditing trail and ensure generated results hold high auditing value. Filters include but are not limited to are time-based, operations, tables, users, values, server process IDs and more
When generating reports immediately via the Export results or Create before-after report options, users will get to choose the export output and details in the final step of the wizard. Additionally, ApexSQL Log can create an automation script based on the wizard configuration which can be used to automate future auditing jobs using batch or PowerShell. Completing this step will finalize the auditing job and the auditing report will be generated in the specified format.
In cases when Open results in grid option is selected, ApexSQL Log will populate above mentioned grid with the auditing results, and users can work with the audited data directly, add additional filtering, browse/search shown results and details on each operation, forensically examine row history and more
Detailed information on exporting options and related features of ApexSQL Log, SQL log analyzer can be found in Exporting options in ApexSQL Log article.
And here are most commonly used auditing solutions
How to continuously audit transaction log file data directly into a SQL Server database
Read a SQL Server transaction log
Open LDF file and view LDF file content
Audit SQL Server database schema changes
Audit SQL Server database security changes
Recovery
Our SQL Log Analyzer comes with powerful recovery mechanisms which allow operations rollback and recovery from all inadvertent or malicious data and structure changes. Leveraging information included in the transaction log files, ApexSQL Log creates historic overview of changes and creates recovery scripts which can be used to return table values or structure to the original state.
Recovery process starts in the same way as the previously demonstrated auditing process. As before, we first connect to the SQL Server database and provide transaction logs to audit
In the output selection, in case of the recovery process, users should opt for Undo/Redo option
As before, all filters available in the auditing process can also be used during the recovery wizard
In the last step, we need to opt for the “Undo (Rollback) script” option
Once the processing is completed, ApexSQL Log will generate a recovery script
Generated recovery script can be opened in any SQL script editor, including but not limited to SQL Server Management Studio, or ApexSQL Log internal editor. Here, the script can be examined, edited and immediately executed to complete the recovery process and have damaged/lost data and structure in their original state
Here are some related articles on using ApexSQL Log recovery feature:
How to recover SQL Server data from accidental UPDATE and DELETE operations
Recover deleted SQL data from transaction logs
Recover a SQL Server database using an old backup and the current transaction log file
Recover a SQL Server database using only a transaction log file (.ldf) and old backup files
How to recover views, stored procedures, functions, and triggers
Replication
Transactional replication is another prized feature of ApexSQL Log. ApexSQL Log can be used to read the information on all DML and DDL changes from the transaction log files, and to generate a “replay” script which can be applied to a subscriber database to mimic the changes and ensure that it is in sync with the original database. Furthermore, this process can be automated using batch or PowerShell.
As was the case with two previously described use cases, replication process follows the same wizard, all the way until the output choice, where we again opt for the Undo/Redo option
After that, the next step in line is to choose the “Redo” option and set the appropriate filters. When performing a transactional replication, it is important to ensure that this job is performed continuously and without duplicating operations/entries, so in any ongoing replication jobs, the best choice would be to use “Continuous auditing” option which will always start and pick up right where the previous replication job ended and ensure uninterrupted replication
To complete the replication job, configured project can be saved or batch/PowerShell script can be automatically generated and later automated to ensure replication is performed on regular pre-determined intervals
Several approaches and solutions for transactional replications can be found in the following articles:
Hands free, no-coding SQL Server database replication of a reporting database
How to setup SQL Server database replication for a reporting server
How to set up a DDL and DML SQL Server database transactional replication solution
How to setup custom SQL Server transactional replication with a central publisher and multiple subscriber databases
marcosplavsczyk · 6 years ago
Link
Many SQL auditing tools and solutions are available to help DBAs achieve change-auditing and compliance goals. To achieve high performance auditing fit for a specific environment, it is important to consider all of these different SQL auditing tools which use different approaches, techniques and mechanisms to audit various SQL Server operations and events in order to choose and implement a most suitable one. In this article, we are going to look and compare 5 different SQL auditing tools which leverage different SQL Server mechanisms for auditing, including embedded auditing, transaction logs, database triggers, SQL traces and more. These are:
SQL Server Audit
SQL Server Change Data Capture (CDC)
ApexSQL Trigger
ApexSQL Log
ApexSQL Audit
SQL Server Audit
SQL Server Audit is the first of the SQL auditing tools we'll examine. It leverages Extended Events in order to audit events on both server and database levels. It utilizes 3 separate components for auditing:
SQL Server audit object – an object which defines the target of auditing
Database audit specification – a SQL Server audit object which specifies what exactly is audited on the database-level
Server audit specification – a SQL Server audit object which defines which exact server-level events will be audited
More detailed information on SQL Server Audit components and features, as well as general information can be found in SQL Server Audit feature – Introduction article.
SQL Server Audit is available on SQL Server 2012 or better for all editions, while also being supported by SQL 2008 for enterprise and developer versions only. Since it leverages Extended Events, the overhead is generally lightweight, but if auditing is configured to pick up large quantities of events in high-traffic databases, as with any of the SQL auditing tools that leverage Extended Events, SQL Server Audit may use more resources and affect SQL Server performance.
Configuring SQL Server Audit is pretty straightforward as far as SQL auditing tools configuration goes and doesn't require a big time investment. As mentioned above, it is necessary to create the aforementioned 3 components to set up auditing, which can be achieved via SQL Server Management Studio through the existing auditing wizards, or via queries and SQL code. A detailed guide on how to configure SQL Server auditing can be found in the How to set up and use SQL Server Auditing article.
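For illustration, a minimal T-SQL sketch of the three components might look like the following, using a binary file target (one of the output options described below); the audit name, file path and audited objects are placeholders, and a real specification should match your own auditing goals:

USE master;
-- 1) Server audit object: defines the auditing target (here, a binary file)
CREATE SERVER AUDIT DemoAudit
TO FILE (FILEPATH = N'C:\AuditLogs\');
ALTER SERVER AUDIT DemoAudit WITH (STATE = ON);

-- 2) Server audit specification: server-level events (e.g. failed logins)
CREATE SERVER AUDIT SPECIFICATION DemoServerSpec
FOR SERVER AUDIT DemoAudit
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);

-- 3) Database audit specification: database-level events (DML on one table)
USE AdventureWorks2014;
CREATE DATABASE AUDIT SPECIFICATION DemoDatabaseSpec
FOR SERVER AUDIT DemoAudit
ADD (INSERT, UPDATE, DELETE ON Person.AddressType BY public)
WITH (STATE = ON);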
Once the SQL Server Auditing is configured, audited data will be saved in one of the following, based on the user’s preference:
Binary file
Windows event log
SQL Server event log
Note that output information will be exactly the same, regardless of the chosen output, while output files can be read using SQL Server Management Studio, Windows Event Viewer and Log File Viewer.
More details and recommendations on how to read SQL Server Audit data can be found in the How to analyze and read SQL Server Audit information article, which offers recommendations and solutions on reporting for this first auditing solution in our SQL auditing tools list.
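When the binary file target is used, the audited events can also be queried directly with T-SQL through the sys.fn_get_audit_file function; the file path below is a placeholder matching the sketch above:

-- Read audited events directly from the .sqlaudit files
SELECT event_time, action_id, session_server_principal_name,
       database_name, object_name, statement
FROM sys.fn_get_audit_file(N'C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT)
ORDER BY event_time DESC;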
Pros:
Light weight
Easy to setup/configure
Various output choices
Audits DML, DDL, Security and other SQL Server events
Supports all transaction log recovery models (full, bulk, simple)
Cons:
The audit details don't specify what exactly is recorded in the main SQL Server audit object component – no event types, objects and databases are specified here
Minimal info is available for audited events – only time, SPID, server name, database name, and object name information is available, while critical information such as the client host, IP address and others is missing
No out-of-the-box mechanisms for deletion or archiving of audited data
The tool lacks any features that would allow or help with managing multiple SQL Server instances, and every configuration step must be individually performed and manually repeated, making this a time-consuming and error-prone task
Not supported on all SQL Server versions and editions
See more articles on auditing with SQL Server Audit below
SQL Server Audit Overview
SQL Server Audit feature – DDL event auditing examples
SQL Server Audit feature – discovery and architecture
Understanding the SQL Server Audit
Reviewing the SQL Server Audit
Intro to Auditing in SQL Server
SQL Server Change Data Capture (CDC)
SQL Server Change Data Capture is integrated in SQL Server and is the second in our list of SQL auditing tools. It is available in SQL Server 2008 and better for enterprise editions, while the standard edition is supported starting with SQL 2016 SP1. SQL Server Change Data Capture periodically queries the online transaction log file (LDF) in order to read information on before and after change values for insert, update and delete operations. Since reading the online transaction log file is a lightweight process, SQL Server Change Data Capture doesn't affect database performance in most cases. The drawback of this approach is that this kind of asynchronous auditing can delay transaction log file truncation (since it locks the transaction log file) – changes marked for capture via CDC can't be truncated until they are actually audited/captured, and even though this ensures that the auditing data will not be lost, preventing transaction log truncation can make it grow and affect database operations.
Compared to the previously described SQL auditing tool at the start of our list – SQL Server Audit – SQL Server Change Data Capture does not audit most of the events audited by SQL Server Audit and focuses only on DML operations, for which it provides more information and details on the audited data – audited information includes exact before and after change values and the complete history of changes on a specific row, which is not available in SQL Server Audit. Unfortunately, it lacks information on who made the change, when and how.
More detailed information on SQL Server Change Data Capture components and general features can be found in SQL Server Change Data Capture – Introduction article.
Audited information is stored in repository tables which are created individually inside audited databases, and there is no centralization whatsoever. In case the repository tables grow too large, a specific cleanup job can be executed to purge the data: “cdc.<database_name>_cleanup”.
Full guide on how to setup SQL Server Change Data Capture can be found in the How to enable and use SQL Server Change Data Capture article, the second SQL Server integrated solution in our SQL auditing tools list.
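As a rough sketch of what enabling CDC and reading the captured changes looks like in T-SQL (the database, schema and table names are placeholders, and the capture instance name follows the default schema_table pattern):

-- Enable CDC on the database, then on one table (SQL Server Agent must be running for the capture job)
USE AdventureWorks2014;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'Person',
    @source_name   = N'AddressType',
    @role_name     = NULL;

-- Read all captured changes for the table within the valid LSN range
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'Person_AddressType');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_Person_AddressType(@from_lsn, @to_lsn, N'all');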
Pros:
Light weight
Supports all transaction log recovery models (full, bulk, simple)
Prevents truncation of data sources (online transaction log) until auditing is completed
Cons:
Audits only DML operations
Asynchronous auditing
Does not audit information on who made the change, from where and how
No centralization for repository and reporting
No drill-down mechanisms
Each database table must be individually configured
Requires intermediate SQL knowledge to setup and configure
Not supported on all SQL Server versions and editions
See more articles on auditing with Change Data Capture (CDC)
ApexSQL Log vs. SQL Server Change Data Capture (CDC)
ApexSQL Audit vs. SQL Server Change Data Capture (CDC)
Monitoring changes in SQL Server using change data capture
SQL Server FILESTREAM with Change Data Capture
ApexSQL Trigger – trigger-based auditing
ApexSQL Trigger is one of 3 ApexSQL SQL auditing tools which we are going to look at in this article. It utilizes database triggers to capture before and after changes on insert, update and delete operations, similar to SQL Server Change Data Capture, which, as we've mentioned before, uses the online transaction log. ApexSQL Trigger is configured using a user-friendly interface and must be manually configured for each database and table, yet the interface is shaped to allow quick configuration, making this a much more pleasant and faster experience than SQL Server Change Data Capture. In addition, ApexSQL Trigger can also audit schema changes (DDL).
On the data storage side, ApexSQL Trigger belongs to both decentralized and centralized SQL auditing tools since it stores audited data inside 2 database tables which can be created in the database being audited, or in a completely separate dedicated database which can be used to store audited data from multiple databases, even coming from different SQL Server instances.
Information on the audited data is also more plentiful in ApexSQL Trigger in comparison to the previously mentioned SQL auditing tools, and the main difference is in the fact that ApexSQL Trigger provides critical information on who made the change and from where they connected, which is not available in SQL Server Change Data Capture. Furthermore, ApexSQL Trigger comes with out-of-the-box reports which can be easily checked from the UI or exported without accessing SQL Server directly.
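To illustrate the general trigger-based technique only (this is not the trigger code that ApexSQL Trigger actually generates, which is far more elaborate), a bare-bones DML audit trigger might look like this; the audit table, trigger and column names are placeholders:

-- Placeholder audit table
CREATE TABLE dbo.AddressTypeAudit (
    AuditId       int IDENTITY(1,1) PRIMARY KEY,
    Operation     char(1)      NOT NULL,  -- I, U or D
    AddressTypeID int          NULL,
    Name          nvarchar(50) NULL,
    ChangedBy     sysname      NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt     datetime2    NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
-- Bare-bones trigger capturing inserts, updates and deletes on one table
CREATE TRIGGER Person.trg_AddressType_Audit
ON Person.AddressType
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Inserts and updates: log the new values ('U' when matching deleted rows exist)
    INSERT INTO dbo.AddressTypeAudit (Operation, AddressTypeID, Name)
    SELECT CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END,
           i.AddressTypeID, i.Name
    FROM inserted AS i;
    -- Deletes: log the removed values
    INSERT INTO dbo.AddressTypeAudit (Operation, AddressTypeID, Name)
    SELECT 'D', d.AddressTypeID, d.Name
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;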
A quick guide on general features and how to setup ApexSQL Trigger, our trigger-based solution in our SQL auditing tools list and configure auditing of SQL databases for DML and DDL changes can be found in An introduction to ApexSQL Trigger article and video.
Pros:
Quick and easy to set up
Provides full information on who made the change, when, how, from where etc.
Audits both DML and DDL changes
Built-in reporting
No SQL knowledge required to setup and operate
Supports all SQL Server versions and editions from SQL 2005 onwards
Supports all transaction log recovery models (full, bulk, simple)
Cons:
Database triggers are created inside audited tables
Not a lightweight option when auditing many tables – all the triggers combined can impose performance degradation
See more articles on ApexSQL Trigger-based auditing in these links:
Creating a “smart” trigger-based audit trail for SQL Server
ApexSQL Log – transaction log auditing
ApexSQL Log is next in line in our list of SQL auditing tools created by ApexSQL. It reads not only from online transaction log files, but also directly from transaction log backups and detached LDF files to create a complete history of DML and DDL changes, while offering a plethora of information for each audited operation. This also means that in addition to performing on-demand and continuous auditing, ApexSQL Log can perform forensic auditing as well and read from old transaction log backups or detached LDF files which were created even before the tool was installed on the server. ApexSQL Log comes with a user-friendly UI which allows users to create and perform auditing tasks by following a simple wizard which allows users to choose data sources for auditing, choose various filters and outputs, and more.
ApexSQL Log uses the continuous auditing feature to perform ongoing auditing tasks and ensure no audited data is duplicated or dropped. This process can be automated to run on a predetermined frequency by utilizing the command line interface (CLI), which fully supports all ApexSQL Log features and options available in the GUI. So, those who prefer using the CLI can completely manage and complete all auditing tasks without accessing the ApexSQL Log GUI.
As was the case with the above-mentioned SQL auditing tools, ApexSQL Log also audits DML operations for before and after changes and can show both before and after values on insert, update and delete operations, as well as the full history of row changes for all audited table fields. While ApexSQL Log uses a similar repository approach as ApexSQL Trigger and uses dedicated database tables to store audited data, ApexSQL Log also provides additional output options and can show all auditing results directly in the GUI grid where the results can be examined, filtered or saved. ApexSQL Log can also export directly to SQL BULK or SQL script, HTML and CSV, while the audited data includes a plethora of information for each audited operation
More information on ApexSQL Log general features can be found in An introduction to ApexSQL Log article and video.
While ApexSQL Log is primarily one of the SQL auditing tools which is a focus of this article, as some bonus features, ApexSQL Log can also roll back (undo) changes based on the information audited from the transaction log files making it an ideal auditing + recovery solution – having an auditing solution which can also be used to recover lost data in disaster scenarios is a huge boon of the tool. Additionally, not only can it roll back changes, but also replay them by creating a redo script, which can be used to replicate the changes on another database, both DML and DDL – making it a viable and easy solution for database replication.
Pros:
Belongs to light weight SQL auditing tools
Quick and easy to setup using GUI or CLI
Various output choices – directly to database, SQL BULK, SQL Script or direct reporting
Advanced drill-down mechanisms
Supports all SQL Server versions and editions from SQL 2005 onwards
No SQL knowledge required to setup and operate
Provides full information on who made the change, when, how, from where etc.
Audits both DML and DDL changes
Forensic auditing – can read changes which occurred before the tool was installed on the server
Bonus feature – disaster recovery
Bonus feature – SQL database replication
Cons:
Audited database must be in a full recovery model – to prevent deletion of the transaction log data used by ApexSQL Log as an auditing source
Asynchronous auditing
No out-of-the-box reporting available (repository tables are queried manually)
See more articles on auditing with the SQL transaction log below
Auditing by Reading the SQL Server Transaction Log
How to continuously audit transaction log file data directly into a SQL Server database
How to continuously read Transaction log file data directly in a SQL Server database with fn_dblog and fn_dump_dblog
ApexSQL Audit – profiler/extended events based auditing
ApexSQL Audit is last but definitely not least from our list of SQL auditing tools we are going to look at in this article. It is the most robust and complete auditing solution of all we’ve presented here, and here is why:
ApexSQL Audit leverages SQL traces in order to audit almost 200 SQL Server events, by far the most of all mentioned SQL auditing tools. It audits all DML and DDL operations, queries (SELECT, SELECT INTO), security related events, backup/restore jobs, warnings and errors on all audited SQL Server instances. Additionally, it can also audit before-after values using database triggers – a feature it shares with ApexSQL Trigger.
ApexSQL Audit is a centralized solution which stores audited data for all audited SQL Servers and databases in a single tamper-evident repository database which comes with numerous out-of-the-box reports while also providing support for report customization. With ApexSQL Audit, it is possible to audit a large number of SQL Servers and databases within the same domain, and all control and configuration is centralized in an easy-to-use GUI from which both configuration and reporting can be performed.
ApexSQL Audit is one of the active SQL auditing tools; it performs synchronous auditing – all events are audited immediately when they are executed on the SQL Server, and audited data is directed to the central repository which can be hosted either on an audited or a non-audited (dedicated) SQL Server.
With the above in mind, it is not a surprise that ApexSQL Audit is a valid solution which can be used to comply with different auditing and compliance standards including PCI, HIPAA, GDPR, SOX, FISMA, BASEL II, FERPA, GLBA, FDA and more.
ApexSQL Audit configuration is two-fold – auditing can be configured using a ‘simple’ filter which allows visual overview of all audited events which need to be ‘checked’ in order to be audited, or via using ‘advanced’ filter which is based on logical conditions which can be combined without limitations to achieve great precision and top granularity which allows ApexSQL Audit to enforce auditing in very specific use cases and environments.
Auditing with ApexSQL Audit is enhanced with several different fully customizable mechanisms, more than any of the previously mentioned SQL auditing tools. All out-of-the-box reports can be run either manually or scheduled to be executed at specific times with dynamic filters. They can also be automatically sent to specific email addresses using an SMTP server. Reports can also be previewed from within the application UI and be created in CSV, XLS, Word or PDF formats.
Additionally, ApexSQL Audit comes with several out-of-the-box alerts which will alert users on any potential issues while the tool is auditing. Furthermore, custom alerts can be created to raise an alert on any auditing event, whether it is an unauthorized access attempt, data loss, permission changes or more – a custom alert will be written to the Windows Event Log and can also be immediately sent via email, again using SMTP as a platform.
More information on ApexSQL Audit general features can be found in An introduction to ApexSQL Audit article and video.
Pros:
Centralized solution
Quick and easy to setup using GUI
Supports all SQL Server versions and editions from SQL 2005 onwards
Provides full information on who made the change, when, how, from where etc.
Audits almost 200 SQL Server events
Alerting on specific audited events
High-end out-of-the-box and custom reporting
Tamper-evident repository
Built-in repository maintenance and archiving features
Compliance regulation standard templates for configuration and reporting
Export, share and apply configuration between SQL Servers and databases
Supports all transaction log recovery models (full, bulk, simple)
Low overhead
Cons:
Potential performance degradation when auditing huge quantities of data and running ‘regular’ and before-after auditing processes
Beginner SQL knowledge required to setup and operate
Easy to learn, hard to master (large number of options and possibilities)
See more articles on auditing with ApexSQL Audit below
Securing access for SQL Server auditing
Various techniques to audit SQL Server databases
0 notes
marcosplavsczyk · 8 years ago
Link
This article will explain different ways of exporting data from SQL Server to the CSV file. This article will cover the following methods:
Export SQL Server data to CSV by using the SQL Server export wizard
Export SQL Server data to CSV by using the bcp Utility
Export SQL Server data to CSV by using SQL Server Reporting Services (SSRS) in SQL Server Data Tools (SSDT) within Visual Studio
Export SQL Server data to CSV by using the ApexSQL Complete Copy results as CSV option
Export SQL Server data to CSV by using SQL Server export wizard
One way to export SQL Server data to CSV is by using the SQL Server Import and Export Wizard. Go to SQL Server Management Studio (SSMS) and connect to an SQL instance. From the Object Explorer, select a database, right click and from the context menu in the Tasks sub-menu, choose the Export Data option:
The SQL Server Import and Export Wizard welcome window will be opened:
Click the Next button to proceed with exporting data.
On the Choose a Data Source window choose the data source from which you want to copy data. In our case, under the Data source drop down box, select SQL Server Native Client 11.0. In the Server name drop down box, select a SQL Server instance. In the Authentication section, choose authentication for the data source connection and from the Database drop down box, select a database from which a data will be copied. After everything is set, press the Next button:
On the Choose a Destination window, specify a location for the data that will be copied from SQL Server. Since the data from the SQL Server database will be exported to the CSV file under the Destination drop down box, select the Flat File Destination item. In the File name box, specify a CSV file where the data from a SQL Server database will be exported and click the Next button:
On the Specify Table Copy or Query window, get all data from a table/view by choosing the Copy data from one or more tables or views radio button or to specify which data will be exported to the CSV file by writing an SQL query by choosing the Write a query to specify the data to transfer radio button. For this example, the Copy data from one or more tables or views radio button is chosen. To continue, press the Next button:
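If the Write a query to specify the data to transfer option is chosen instead, any valid SELECT statement can be supplied; for example, an illustrative query against the AdventureWorks2014 sample database might be:

-- Export only selected columns and rows instead of the whole table
SELECT at.AddressTypeID, at.Name, at.ModifiedDate
FROM Person.AddressType AS at
WHERE at.ModifiedDate >= '2008-01-01';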
Under the Configure Flat File Destination window, choose the table or view from the Source table or view drop down box for exporting data to the CSV file:
To view which data will be exported to the CSV file, click the Preview button. The Preview Data window will appear with data that will be exported:
If you are satisfied with the preview data, click the Next button in order to continue with exporting data. The Save and Run Package window will appear. Leave settings as they are and click the Next button:
The Complete Wizard window shows the list of choices that were made during of exporting process:
To export data from SQL Server to a CSV file, press the Finish button. The last window shows information about the exporting process and whether it was successful or not. In this case, the exporting process finished successfully:
On the image below, the ExportData.csv file in Excel and Notepad is shown with the exported data:
SQL Server Import and Export Wizard can be initiated without using SSMS, go to start and type word “Export”, from the search results choose 64-bit or 32-bit version of SQL Server Import and Export Wizard:
Export SQL Server data to CSV by using the bcp Utility
The bcp (bulk copy program) utility is used to copy data between SQL Server instance and data file. With the bcp utility, a user can export data from an instance of SQL Server to a data file or import data from a data file to SQL Server tables.
To start export SQL data to CSV file, first open Command Prompt (cmd), go to start and type cmd and click on the Command Prompt item:
The Command Prompt window will appear:
Then type bcp ? and press the Enter key in order to see if everything works as it should. In our case, an error occurs:
As it can be seen from the error message box, the msodbcsql13.dll file is missing. To resolve this problem, download and install Microsoft ODBC Driver 13 for SQL Server.
Now, when in the Command Prompt window, the bcp ? command is executed, the following information will appear:
The screen above shows all the different switches that can be used in bcp utility. So, let’s use some of these switches and export SQL Server data to CSV.
In the Command Prompt window, type the word bcp followed by the fully qualified name of the SQL table from which data should be exported: first type the name of the database which contains the table, followed by a dot. After the dot, type the schema name of the table, then another dot and, after it, the name of the table which contains the data for exporting (e.g. AdventureWorks2014.Person.AddressType):
After the name of the SQL table, press the Space key and type the word out:
The out command copies data from a database table or view to a specified file.
Also, the queryout command exists which copies data from an SQL query to a specified file.
The in command copies data from a file to a specified database table.
After the out command, add a location of a CSV file where the data from the SQL table will be placed, for example (C:\Test\ExportData.csv)
Now when the csv file is specified, there are a few more switches that need to be included in order to export SQL Server data to CSV file.
After the CSV file type the -S switch and the name of an SQL Server instance to which to connect (e.g. WIN10\SQLEXPRESS):
Then type the -c switch and, after that, type the -t switch to set the field terminator which will separate each column in an exported file. In the example, the comma ( , ) separator will be used:
At the end, enter a switch which determines how the connection to SQL Server will be authenticated. If the -T switch is used, the trusted connection (Windows authentication) will be used to access SQL Server. For SQL authentication, use the -U switch for the SQL Server user and -P for the SQL Server user password.
In this example, the trusted connection (-T switch) will be used:
Now, when the Enter key is pressed, a similar message will appear with information about the copied data:
To confirm that data from a specified table have been copied to CSV file, go to the location where the file is created, in our case that will be C:\Test:
And open the ExportData.csv file:
Export SQL Server data to CSV by using SQL Server Reporting Services (SSRS) in SQL Server Data Tools (SSDT) within Visual Studio
SSRS allows saving exported data in one of the following formats: PDF, Excel, XML, MHTML, Word, CSV, PowerPoint or TIFF.
To start creating a report server project first open SSDT. Go to File menu and under the New sub-menu, choose the Project option:
Under Business Intelligence, select the Reporting Services item and on the right side, choose the Report Server Project Wizard option:
Note: In case the Business Intelligence or Report Server Project Wizard options don't appear, SSDT needs to be updated with the Business Intelligence templates. More about this can be found on the Download SQL Server Data Tools (SSDT) page.
In the Name box, enter the name of the project (e.g. ExportData) and in the Location box, choose where the project will be created:
After that is set, press the OK button, the Report Wizard window appears, press the Next button to continue:
The Select the Data Source window will appear:
In the Connection string box, a connection string to the SQL Server database can be entered from which a report can be created or press the Edit button on the Select the Data Source window and in the Connection Properties window, set the connection string to the desired database, like from the image below, and press the OK button:
This will be in the Connection string box on the Select the Data Source window. Press the Next button to continue with the settings:
In the Design the Query window, specify a query to execute to get data for the report. There are two ways for getting a query to execute. One way is to use the Query Designer window by clicking the Query Builder button on the Design the Query window:
And the second way is to type the desired query in the Query string box:
After setting the query, click the Next button. On the Select the Report Type window, leave default settings and press the Next button:
On the Design the Table window, we will leave everything as it is and press the Next button:
The Completing the Wizard window shows all steps/settings that are taken during the process creating the report. Press the Finish button to create the report:
After pressing the Finish button, the created report will be shown. Under the Preview tab, click the Export button and, from the menu, choose the format in which the generated data will be exported (e.g. CSV):
Export SQL Server data to CSV by using the ApexSQL Complete Copy results as CSV option
Copy results as is a feature in ApexSQL Complete, a free add-in for SSMS and Visual Studio, that copies the data from the Results grid to the clipboard in one of the following formats: CSV, XML or HTML, in just one click.
In a query editor, type the following code and execute:
USE AdventureWorks2014;
SELECT at.* FROM Person.AddressType at;
The following results will be displayed in the Results grid:
In the Results grid, select the part or all data, right click and from the context menu, under the Copy results as sub-menu, choose the CSV command:
This will copy the selected data from the Results grid to the clipboard. Now, all that needs to be done is to create a file where the copied data should be pasted:
The ApexSQL Complete Copy results as option can save you a great amount of time when you need to copy repetitive SQL data to another data format.
See also:
Import and Export Bulk Data by Using the bcp Utility (SQL Server)
Start the SQL Server Import and Export Wizard
SQL Server Data Tools
Create a Basic Table Report (SSRS Tutorial)
marcosplavsczyk · 8 years ago
Link
This article explains how to create customized policies for index defragmentation jobs for SQL Server
Introduction
When creating indexes, database administrators should look for the best settings to ensure minimal performance impact and degradation. However, over time, indexes will get fragmented, which can severely impact server performance.
Regular index maintenance is important in these cases as it restores the performance to previous levels. Performing index maintenance on a regular schedule, however, can be very time consuming and frustrating for the database administrator.
ApexSQL Defrag offers easy to use scheduling and policies to enable worry-free index maintenance, more so when faced with small maintenance windows or even when encountered with environments which must be online at all times.
Online index rebuilds – what are they and how do they work
Online index rebuilds are crucial for environments which have to be online and available at all times, such as e-commerce websites. Online index rebuilds make it possible to perform index maintenance while maintaining the database without interruption. It essentially enables multiple users to update and query the data in the index while it is being rebuilt. This is opposed to offline index rebuilds, where the data definition language (DDL) operations performed offline acquire and hold exclusive locks on the underlying data and indexes associated with that data, which prevent any modification and query to the underlying data as long as the index operation is in progress.
The way online index rebuild works is by creating source and target structures. The existing index is considered as the source structure. During the rebuild, there are a couple of phases in the lifecycle of the rebuild both on the source and target structures, those being Preparation, Build and Final.
On the source structure, when starting the online index rebuild, the first phase, Preparation, a new index is created and set to write-only. That new index is considered a target structure.
After that, the source structure enters the Build phase, in which the data gets scanned, sorted, then merged and inserted into the target structure in bulk load operations. Any user operations made during that time, such as insert, update or delete operations, are applied to both the source and target structures.
After the Build phase comes the Final phase, where the system metadata is updated to replace the source with the target structure. After that, source structure gets dropped if that is required.
On the target structure, there are also three phases, Preparation, Build and Final.
The Preparation phase of the target structure consists of creating the new index and setting it to write-only.
In the Build phase, the data gets inserted from the source structure, as well as any user modifications.
In the Final phase, the index metadata gets updated and the index is set to have a read/write status. After that, any new queries use the new index.
Some support exceptions exist. Clustered indexes containing large object (LOB) data types: image, ntext, text, must be rebuilt offline. Also, local temp tables indexes must be rebuilt offline. This restriction, though, does not apply on global temp tables.
SQL Server 2017 introduces resumable online index rebuilds, which can be resumed after an interruption such as an unexpected failure, a manual PAUSE command or a database failover. See Alter Index on Microsoft Docs for more information on the RESUMABLE parameters of an online index rebuild.
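A minimal sketch of a resumable online rebuild, assuming SQL Server 2017 or later and the same hypothetical index name:

-- Start a resumable online rebuild that pauses automatically after 60 minutes
ALTER INDEX IX_Person_LastName ON Person.Person
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the rebuild manually from another session, then resume (or abandon) it later
ALTER INDEX IX_Person_LastName ON Person.Person PAUSE;
ALTER INDEX IX_Person_LastName ON Person.Person RESUME;
-- ALTER INDEX IX_Person_LastName ON Person.Person ABORT;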
Some consideration is also needed for the disk space such an operation requires, as it creates one more index for each index being rebuilt and fills it with the same data. Even though online index rebuilds permit user update activity at the same time as the rebuild, the operation itself will take longer if the user update activity is very heavy. In most cases, an online index rebuild is slower than the equivalent offline rebuild, regardless of the concurrent update activity on the index.
Also, large-scale index rebuilds, whether performed offline or online, generate large data loads that can fill the transaction log quickly. To make sure that the index rebuild can be rolled back should there be a need for it, the transaction log cannot be truncated until the rebuild is finished. It can, however, be backed up during the rebuild, and it must have enough space to store the index rebuild transactions and any user activity transactions performed while the rebuild is in progress.
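For example, log usage can be monitored and the log backed up while the rebuild runs; a hedged sketch, with the database name and backup path as placeholders:

-- Check how full the transaction logs are across the instance
DBCC SQLPERF(LOGSPACE);

-- Back up the transaction log while the rebuild is in progress (database name and path are placeholders)
BACKUP LOG [SalesDB] TO DISK = N'D:\Backup\SalesDB_log.trn';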
While available for all environments, online index rebuilds are mostly recommended for environments that require a high level of database availability, despite the noticeable performance impact of rebuilding online versus offline. For environments where maintenance windows or offline periods are an option, for example during periods of lower traffic, offline index rebuilds are preferred since they are generally faster and less resource-intensive than online rebuilds.
Setting custom policies
ApexSQL Defrag has a list of predefined policy templates, as well as the ability to create your own customized templates for defragmentation jobs, which you can then use to create index defragmentation jobs with ease just by selecting the indexes you need to maintain.
First, you need to have ApexSQL Defrag installed and configured, which can be done by following this article.
To start, go to the Policies tab on the main application window and click on Templates.
After clicking on Templates, we are presented with a window displaying a couple of default templates, which cannot be edited nor deleted. They represent some of the most common practices in index maintenance. They are differentiated by the type of index rebuild, whether it’s offline index rebuild or online index rebuild.
Note: Online index rebuilds are supported starting from SQL Server 2005, in the Enterprise edition only. From SQL Server 2008, online index rebuilds are available in the Enterprise, Developer and Evaluation editions.
After pressing the Create button, we are presented with a Create policy template wizard
Under General, we can find the Name and Description of the policy.
Under Index fragmentation thresholds, we can select the type of rebuild between offline and online.
Following that, there are sliders where we select the fragmentation percentage at which an index will be reorganized, the percentage at which it will be rebuilt, and the lower threshold percentage below which indexes will not be rebuilt or reorganized.
After that, we can select the Fragmentation scan mode between Limited, Sampled and Detailed
Limited mode is the fastest as it scans the fewest pages; it only scans the pages above the leaf level. Sampled mode scans and returns statistics based on a 1% sample of all the pages in the index. Detailed mode scans all pages of an index and returns the most accurate statistics. The modes get slower when moving from Limited towards Detailed as more work is performed in each subsequent mode.
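The same three scan modes are exposed by the sys.dm_db_index_physical_stats function, so the fragmentation levels a policy acts on can also be inspected by hand; a sketch assuming a hypothetical database and table:

-- Check fragmentation of all indexes on a table using the Limited scan mode
-- (pass 'SAMPLED' or 'DETAILED' instead for more accurate statistics at a higher cost)
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
       DB_ID(N'SalesDB'), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;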
Under Index targets thresholds, we can enable or disable a couple of options in our template. The first two options, Include indexes larger than: and Exclude indexes larger than:, are pretty self-explanatory, giving users the option to target only indexes which are larger than a set number of index pages, or smaller than a set number of index pages. The third option, Include first percentage of indexes, selects only the indexes which fit in the set top percentage of the policy targets. The targets are sorted by fragmentation in descending order.
Under Resource thresholds, we can apply a couple of options in our template. The CPU load option checks whether the load on the CPU is less than the specified value; if it is, the job will run as normal. Page density ensures the job will run only if the page density for the selected indexes is lower than specified. Memory usage works similarly to CPU load: the job will run only if more than the specified amount of RAM is available. Hard disk usage ensures the job will run only if more than the specified amount of storage space is available.
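These options roughly correspond to checks that can also be run by hand through SQL Server system views; a rough sketch of the page density, memory and disk checks (the database name, table name and unit conversions are illustrative assumptions):

-- Page density (avg_page_space_used_percent is only populated in SAMPLED or DETAILED mode)
SELECT avg_page_space_used_percent
FROM sys.dm_db_index_physical_stats(
       DB_ID(N'SalesDB'), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'SAMPLED');

-- Available physical memory on the server, in MB
SELECT available_physical_memory_kb / 1024 AS available_memory_mb
FROM sys.dm_os_sys_memory;

-- Free space on the volumes hosting the database files, in GB
SELECT DISTINCT vs.volume_mount_point,
       vs.available_bytes / 1024 / 1024 / 1024 AS free_space_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;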
After pressing OK, the policy template is created and can be seen in the list
Creating customized policies in ApexSQL Defrag
Now that we have created a policy template which suits our maintenance needs, we can proceed to create a defragmentation policy
Go to Policies tab and click on the small arrow below the Create button, hover over From template and in the following menu choose our previously created policy template
In the Policy wizard, we enter the name of the policy; the Description is already filled with information from the policy template. Under Schedule, we select when to execute the policy. On Targets, click the three dots to the right and, in the following window, select the indexes you want maintenance done on.
Similarly, in the Thresholds tab of the same wizard, we can see all the options are preset with the ones we have set up in the template we created earlier
After creating the policy, we can see it in the Policies activity tab
Another option when creating customized policies is the ability to save the policy we have been creating to a template from the Save as template button in the Policy wizard itself
We are presented with the familiar template creation wizard where we need to enter the policy template name and check other options listed in it and click on OK
After that, we can select that template at any time when creating new policies
  The post How to customize policies for automatically defragmenting SQL Server indexes appeared first on Solution center.
0 notes
file-formats-programming · 8 years ago
Text
Table Text Formatting & Setting PPT Slide Size with Respect to Content Scaling using .NET
What's New in this Release?
Aspose team is happy to announce the release of Aspose.Slides for .NET 17.1.0. It includes support for some much-awaited new features along with the resolution of outstanding issues. Earlier, users who needed to alter the text formatting for table rows or columns had to traverse through every row or column to set the text formatting. Now, Aspose.Slides offers a new feature for setting the text format at the table, row or column level. Users can also set the slide size using different ways of content scaling. Please visit the documentation article, Setting the Slide Size with respect to Content Scaling, for further details. Users can now also use custom chart data label values from the chart data workbook. This release resolves issues related to presentation saving which earlier resulted in missing or wrong fonts, missing headers, improper text, missing hyperlinks and wrong line styles for shapes in saved presentations. It also addresses presentation access and saving issues that resulted in exceptions, including NullReferenceException, ArgumentException, NotImplementedException and PptxReadException, in previous releases. Presentation rendering support has also been improved, resolving issues related to missing or improper text rendering, missing images, and missing charts and their entities in generated PDF, HTML and slide thumbnail outputs. The list of new features, improvements and bug fixes in this release is given below:
Support for Value from Cells feature for chart data labels
Support for bulk setting text properties for whole table, row or column
Set and control text spellcheck language using Aspose.Slides
Add support for changing language of presentation and shape's text
Using locale for setting the language
Language property for textboxes
Changing slides orientation has no effect on contents
Changing font related properties in master slide not getting applied
Setting multi-level chart categories not working
Unexpected subscript effect on saving presentation
Exception on converting ppt to pptx or pptm
Pptx not properly converted to html
Pptx to pdf conversion giving OutOfMemoryError
Text are improperly rendered in generated PDF
EMF images are not properly rendered in generated pdf
Embedded fonts are not getting copied when cloning slide
Exception on generating thumbnails
Problem with content in result file after saving Ppt to Pptx
Cylinder drawing is changed after loading and saving a ppt
Meta files are improperly rendered in generated thumbnails
Character misplaced after converting to svg
Cell border not generated as double line
Icon missing after converting slide to svg
Text in pptx document not justified properly
Bullets changes while converting odp to pdf
Creating charts from sql server table
Slide orientation went wrong
Thumbnails output cropped
Shapes with FillType.Group missing in the generated thumbnail
Text is improperly rendered in generated thumbnail
Bullet space changed after saving ppt
Pptx changed after converting to pdf
Exception on saving presentation
Text is not being rendered when exporting slides as thumbnails
High memory consumption while converting pptx to pdf
Incorrect character positioning in HTML representation of the presentation document in Safari for iOS
Equations are improperly rendered in generated PDF and thumbnails
Chart title appears on pptx to html
Incorrect chart on generated pdf
Date changed to asterisk when saving presentation
DataPoints of scattered chart are not showing in the generated image file
Y Axis Labels are not correct in the generated image file
Images are not rendered in HTML to PPTX Import
Exception on Opening the PPTX file. Error unexpected font parsing exception
Ppt to Pptx conversion disturbs equations
Improper gradient fill export for geometry shapes
Improper DrBrush is used when exporting gradient filled text to PDF
Gradient brush is incorrectly formed when exporting gradient-filled text
PPTX to PDF: Text is missing in generated PDF file
Footer not Visible when setting using Metacharacters
Chart improperly rendered in generated PDF           
Protected view error message on generating PPT form Aspose.Slides
Improper thumbnail generated for PPT
Default font related properties are set when copying slide notes
Index out of range exception on accessing presentation
PowerPoint 2010 Error Message: PowerPoint has detected problem in file in generated PPT
ProtectedView message appears if multiple hyperlinks are added in generated presentation
Picture is missing in notes page on presentation save
Equations text overlap in the generated PDF
Mathematical equation are improperly rendered in exported PDF
Other most recent bug fixes are also included in this release
Newly added documentation pages and articles
Some new tips and articles have now been added to the Aspose.Slides for .NET documentation that briefly guide users on how to use Aspose.Slides for performing different tasks, like the following.
Setting the Slide Size with respect to Content Scaling
Setting the WorkBook Cell As Chart DataLabel
Overview: Aspose.Slides for .NET
Aspose.Slides is a .NET component to read, write and modify a PowerPoint document without using MS PowerPoint. PowerPoint versions from 97-2007 and all three PowerPoint formats: PPT, POT, PPS are also supported. Now users can create, access, copy, clone, edit and delete slides in their presentations. Other features include saving PowerPoint slides into PDF, adding & modifying audio & video frames, using shapes like rectangles or ellipses and saving presentations in SVG format, streams or images.
More about Aspose.Slides for .NET
Homepage of Aspose.Slides for .NET
Download of Aspose.Slides for .NET
Online documentation of Aspose.Slides for .NET
0 notes
marcosplavsczyk · 8 years ago
Link
In the article, “What is a data dictionary and why would I want to build one?” a data dictionary was described and compared to other alternatives to documenting, auditing and versioning a database. The article also described the difference between a Dumb and a Smart data dictionary.
Now that we’ve whetted your appetite, this article will explain how to create a smart data dictionary using XML schema change exports from ApexSQL Diff.
The created data dictionary can later be used for various things, like querying to see the full history of object changes or creating aggregate exports showing database change statistics, which will be covered in a separate article.
Requirements
SQL Server 2005 and above
ApexSQL Diff Professional edition (as this task requires the CLI). ApexSQL Diff is a 3rd party tool for comparing SQL Server schemas. Version 2016.02.0383 or higher is required
Integrated security for database connectivity. For SQL Server authentication the script can be modified to use encrypted credentials stored in the project file itself. Learn more about different options for handling database credentials here
A configuration file (config.xml) that provides the name of the SQL Server and the database where the data dictionary is located, and also provides the SQL Server and database name whose changes we are tracking. The config file must be located in the same directory as the PowerShell script
Configuration
Setting up the baseline
The first step in the process of creating a data dictionary is setting up the data sources. To create a “Smart” data dictionary, our dictionary must be “change aware” so that it only writes changes (new, updated and deleted objects). To do this we’ll need to establish (and periodically reset) the baseline and compare it to our actual database
To accomplish this, we’ll use ApexSQL schema snapshots, a proprietary and lightweight file that contains an entire database schema. Snapshots can be created directly from ApexSQL Diff and can be used by ApexSQL Diff to compare to a database to produce a difference export, which we’ll be using later in the article
We’ll create an initial snapshot and save it to the Root path (as specified in the Config file) using ApexSQL Diff. This will all be done by the PowerShell script, which will manage the baseline snapshots for us
This snapshot will serve as our initial baseline. Each time changes are discovered between the baseline snapshot and the database we are working on, we’ll replace our baseline snapshot with a newer version, so it can be used as a baseline on the next comparison to check for any changes
Day 1: Baseline snapshot = Database (no changes)
Day 2: Baseline snapshot = Database (no changes)
Day 3: Baseline snapshot < Database (changes)
Write a version of the database (changes only) to the data dictionary
Create a new baseline using the newer version of the database
Day 4: Baseline snapshot version 2 = Newer version of database (no changes)
etc
Creating the data dictionary repository
The next part is the creation of the repository, which will exist as a database table in SQL Server itself. This repository, filled with the data from the XML export, will become our data dictionary
So, let’s create a database which will be used as a repository. We’ll call it “DataDictionary”. Then, we’ll create a table of the same name for storing the data dictionary information. (The script for this table can be found in Appendix A)
The next step is creating a stored procedure which will be used to fill the data dictionary. This procedure will read the data from the XML schema difference export, create a temporary XML table with the data from the export, and then extract that data from the XML table into the DataDictionary table we created earlier.
The stored procedure FillDataDictionary creates a temporary XML table with all of the data from the XML export. All of the data from that temporary table is then parsed and placed in the DataDictionary table. Once that is done, the temporary table is dropped. (The script for this procedure can be found in Appendix B)
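At its core, the loader relies on the OPENROWSET bulk XML import pattern; a stripped-down sketch of just that step (the file path is a placeholder, the full procedure is in Appendix B):

-- Read an entire XML export file into a single XML value
SELECT CONVERT(XML, BulkColumn) AS XMLData
FROM OPENROWSET(BULK 'C:\DataDictionary\Exports\SchemaDifferenceExport.xml', SINGLE_BLOB) AS x;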
Configuration settings
Now that our data dictionary infrastructure has been created, we’ll need to set the configuration settings to determine
The SQL Server and database name whose changes we’re going to track
The SQL Server and database name of the Data Dictionary repository
We’ll create this as an XML file called config.xml and put it in the same directory as the PowerShell script. The PowerShell script will open that config file and parse it, to gather the necessary configuration information to successfully run the data dictionary upload job each time it runs
Execution
Once the database repository, the DataDictionary table and the FillDataDictionary procedure are created, and you have edited the configuration file as needed, you are ready to go. The PowerShell script we created can simply be executed to create the first batch of records in our data dictionary.
The script can be scheduled to run unattended, every night at 12:00 AM for example, to ensure your data dictionary is continuously updated
Download and unpack the .zip file
Edit the config.xml file (see the example in Appendix C – Configuration file) to:
Specify the name and the Server name of the Database that you will create a data dictionary for
Specify the name and the Server name of the DataDictionary repository
Run the enclosed SQL script [Create_DataDictionary_Schema.sql] to create the repository database, table and loader stored procedure in the SQL Server you specify
Run the PowerShell script to capture initial state of your database
Test
Next, make a change to an object in the database, add a new object etc.
Now run the PowerShell script again. You should see a record for each change when you query your Data dictionary database.
To query your data dictionary, use this SQL
SELECT * FROM [DataDictionary].[dbo].[DataDictionary]
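Since each run appends one row per changed object, the same table also answers the “full history of object changes” question mentioned earlier; a sketch with a hypothetical object name:

-- Full change history of a single object, newest change first
SELECT ChangeDate, ObjectType, [Owner], Name, DiffType, DDL
FROM [DataDictionary].[dbo].[DataDictionary]
WHERE Name = 'Person' -- hypothetical object name
ORDER BY ChangeDate DESC;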
Scheduling for unattended execution
Once everything is running well, schedule this script to run unattended.
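One way to do this is with a SQL Server Agent job that has a PowerShell job step; a minimal sketch, assuming the script is saved as C:\DataDictionary\DataDictionary.ps1 (the path, job name and schedule are placeholders):

USE msdb;
GO
-- Create the job and a single PowerShell step that runs the collection script
EXEC dbo.sp_add_job @job_name = N'DataDictionary capture';
EXEC dbo.sp_add_jobstep @job_name = N'DataDictionary capture',
  @step_name = N'Run PowerShell script',
  @subsystem = N'PowerShell',
  @command = N'& "C:\DataDictionary\DataDictionary.ps1"';
-- Schedule it to run every night at 12:00 AM
EXEC dbo.sp_add_jobschedule @job_name = N'DataDictionary capture',
  @name = N'Nightly',
  @freq_type = 4, @freq_interval = 1,
  @active_start_time = 000000;
EXEC dbo.sp_add_jobserver @job_name = N'DataDictionary capture', @server_name = N'(local)';
GO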
In the next article in this series, we’ll demonstrate how to leverage your newly created data dictionary for auditing, version control and aggregate reporting on transactional data
Appendix A – the repository creation script
-- Create database [DataDictionary]
IF (NOT EXISTS(SELECT * FROM sys.databases WHERE name='DataDictionary'))
BEGIN
  CREATE DATABASE [DataDictionary]
END
GO

USE [DataDictionary]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- Create table [dbo].[DataDictionary]
IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID (N'[dbo].[DataDictionary]'))
BEGIN
  CREATE TABLE DataDictionary
  (
    ID int primary key identity(1,1),
    ChangeDate [datetime],
    [Server] [varchar](128),
    [Database] [varchar](128),
    ObjectType [varchar](50),
    Name [varchar](128),
    Owner [varchar](128),
    ObjectID [int],
    DiffCode [bit],
    DiffType [char](2),
    DiffANSI [varchar](5),
    DiffAssembly [varchar](5),
    DiffAssemblyClass [varchar](5),
    DiffAssemblyMethod [varchar](5),
    DiffBaseType [varchar](5),
    DiffBody [varchar](5),
    DiffBoundDefault [varchar](5),
    DiffBoundDefaults [varchar](5),
    DiffBoundRule [varchar](5),
    DiffBoundRules [varchar](5),
    DiffChangeTracking [varchar](5),
    DiffCheckConstraints [varchar](5),
    DiffCLRName [varchar](5),
    DiffColumnOrder [varchar](5),
    DiffColumns [varchar](5),
    DiffDataspace [varchar](5),
    DiffDefaultConstraints [varchar](5),
    DiffDefaultSchema [varchar](5),
    DiffDurability [varchar](5),
    DiffExtendedProperties [varchar](5),
    DiffFiles [varchar](5),
    DiffForeignKeys [varchar](5),
    DiffFulltextIndex [varchar](5),
    DiffIdentities [varchar](5),
    DiffIndexes [varchar](5),
    DiffLockEscalation [varchar](5),
    DiffManifestFile [varchar](5),
    DiffMemoryOptimized [varchar](5),
    DiffNullable [varchar](5),
    DiffOwner [varchar](5),
    DiffParameters [varchar](5),
    DiffPermissions [varchar](5),
    DiffPermissionSet [varchar](5),
    DiffPrimaryKey [varchar](5),
    DiffReturnType [varchar](5),
    DiffScale [varchar](5),
    DiffSize [varchar](5),
    DiffStatistics [varchar](5),
    DiffUnique [varchar](5),
    DiffUserLogin [varchar](5),
    DiffXMLColumnSet [varchar](5),
    DiffXMLIndexes [varchar](5),
    DDL [nvarchar](max)
  )
END
GO
Appendix B – the data-loader stored procedure script
-- Create stored procedure [dbo].[FillDataDictionary] CREATE PROCEDURE [dbo].[FillDataDictionary] @xmlLocation VARCHAR(150) AS BEGIN DECLARE @COMMAND NVARCHAR(MAX) SET @COMMAND = N'SELECT CONVERT(XML, BulkColumn) AS XMLData INTO ##XMLwithOpenXML FROM OPENROWSET(BULK ''' + @xmlLocation + ''', SINGLE_BLOB) AS x'; EXEC sp_executesql @COMMAND DECLARE @XML AS XML ,@hDoc AS INT ,@SQL NVARCHAR(MAX) SELECT @XML = XMLData FROM ##XMLwithOpenXML EXEC sp_xml_preparedocument @hDoc OUTPUT ,@XML DROP TABLE ##XMLwithOpenXML INSERT INTO DataDictionary SELECT GETDATE() AS ChangeDate ,[Server] ,[Database] ,[ObjectType] ,[Name] ,[Owner] ,[ObjectID] ,[DiffCode] ,[DiffType] ,[DiffANSI] ,[DiffAssembly] ,[DiffAssemblyClass] ,[DiffAssemblyMethod] ,[DiffBaseType] ,[DiffBody] ,[DiffBoundDefault] ,[DiffBoundDefaults] ,[DiffBoundRule] ,[DiffBoundRules] ,[DiffChangeTracking] ,[DiffCheckConstraints] ,[DiffCLRName] ,[DiffColumnOrder] ,[DiffColumns] ,[DiffDataspace] ,[DiffDefaultConstraints] ,[DiffDefaultSchema] ,[DiffDurability] ,[DiffExtendedProperties] ,[DiffFiles] ,[DiffForeignKeys] ,[DiffFulltextIndex] ,[DiffIdentities] ,[DiffIndexes] ,[DiffLockEscalation] ,[DiffManifestFile] ,[DiffMemoryOptimized] ,[DiffNullable] ,[DiffOwner] ,[DiffParameters] ,[DiffPermissions] ,[DiffPermissionSet] ,[DiffPrimaryKey] ,[DiffReturnType] ,[DiffScale] ,[DiffSize] ,[DiffStatistics] ,[DiffUnique] ,[DiffUserLogin] ,[DiffXMLColumnSet] ,[DiffXMLIndexes] ,[DDL] FROM OPENXML(@hDoc, 'root/*/*') WITH( ObjectType [varchar](50) '@mp:localname' ,[Server] [varchar](50) '../../Server1' ,[Database] [varchar](50) '../../Database1' ,NAME [varchar](50) 'Name' ,OWNER [varchar](50) 'Owner1' ,ObjectID [int] 'ObjectID1' ,DiffCode [bit] 'Diff_Code' ,DiffType [char](2) 'DiffType' ,DiffANSI [varchar](5) 'DiffANSI' ,DiffAssembly [varchar](5) 'DiffAssembly' ,DiffAssemblyClass [varchar](5) 'DiffAssemblyclass' ,DiffAssemblyMethod [varchar](5) 'DiffAssemblymethod' ,DiffBaseType [varchar](5) 'DiffBasetype' ,DiffBody [varchar](5) 'DiffBody' ,DiffBoundDefault [varchar](5) 'DiffBounddefault' ,DiffBoundDefaults [varchar](5) 'DiffBounddefaults' ,DiffBoundRule [varchar](5) 'DiffBoundrule' ,DiffBoundRules [varchar](5) 'DiffBoundrules' ,DiffChangeTracking [varchar](5) 'DiffChangetracking' ,DiffCheckConstraints [varchar](5) 'DiffCheckconstraints' ,DiffCLRName [varchar](5) 'DiffCLRname' ,DiffColumnOrder [varchar](5) 'DiffColumnorder' ,DiffColumns [varchar](5) 'DiffColumns' ,DiffDataspace [varchar](5) 'DiffDataspace' ,DiffDefaultConstraints [varchar](5) 'DiffDefaultconstraints' ,DiffDefaultSchema [varchar](5) 'DiffDefaultschema' ,DiffDurability [varchar](5) 'DiffDurability' ,DiffExtendedProperties [varchar](5) 'DiffExtendedproperties' ,DiffFiles [varchar](5) 'DiffFiles' ,DiffForeignKeys [varchar](5) 'DiffForeignkeys' ,DiffFulltextIndex [varchar](5) 'DiffFulltextindex' ,DiffIdentities [varchar](5) 'DiffIdentities' ,DiffIndexes [varchar](5) 'DiffIndexes' ,DiffLockEscalation [varchar](5) 'DiffLockescalation' ,DiffManifestFile [varchar](5) 'DiffManifestfile' ,DiffMemoryOptimized [varchar](5) 'DiffMemoryoptimized' ,DiffNullable [varchar](5) 'DiffNullable' ,DiffOwner [varchar](5) 'DiffOwner' ,DiffParameters [varchar](5) 'DiffParameters' ,DiffPermissions [varchar](5) 'DiffPermissions' ,DiffPermissionSet [varchar](5) 'DiffPermissionset' ,DiffPrimaryKey [varchar](5) 'DiffPrimarykey' ,DiffReturnType [varchar](5) 'DiffReturntype' ,DiffScale [varchar](5) 'DiffScale' ,DiffSize [varchar](5) 'DiffSize' ,DiffStatistics [varchar](5) 'DiffStatistics' ,DiffUnique [varchar](5) 'DiffUnique' 
,DiffUserLogin [varchar](5) 'DiffUserlogin' ,DiffXMLColumnSet [varchar](5) 'DiffXMLcolumnset' ,DiffXMLIndexes [varchar](5) 'DiffXMLindexes' ,DDL [nvarchar](max) 'SourceDDL' ) EXEC sp_xml_removedocument @hDoc END
Appendix C – Configuration file
<config>
  <!-- Server name where the target database is placed -->
  <Server>(local)</Server>
  <!-- Target database name whose changes will be tracked -->
  <Database>AdventureWorks2014</Database>
  <!-- Server name where the data dictionary repository will be placed -->
  <DataDictionaryServer>(local)</DataDictionaryServer>
  <!-- Name of the data dictionary repository (database) -->
  <DataDictionaryDatabaseName>DataDictionary</DataDictionaryDatabaseName>
</config>
Appendix D – the PowerShell script
#find the Snapshot file which has the highest value for the "created date" parameter function FindSnapshotByDate($folder) { #find all files whose name ends with .axsnp $Files = Get-ChildItem -Path $folder -Filter "*.axsnp" if ($Files.Length -eq 0) { #if no such file is found, then that means that there isn't any snapshot previously created return $null } $mostRecentFile = $Files | Sort-Object -Property "CreationTime" -Descending | Select-Object -First 1 return $mostRecentFile.FullName } #check the existance of Exports, Logs or Snapshot folders, creates it if it is not created and returns the path function CheckAndCreateFolder($rootFolder, [switch]$Exports, [switch]$Baselines, [switch]$Logs) { $location = $rootFolder #set the location based on the used switch if($Exports -eq $true) { $location += "\Exports" } if($Baselines -eq $true) { $location += "\Snapshots" } if($Logs -eq $true) { $location += "\Logs" } #create the folder if it doesn't exist and return its path if(-not (Test-Path $location)) { mkdir $location -Force:$true -Confirm:$false | Out-Null } return $location } #insert schema difference records into the datadictionary table function InsertRecordsToDataDictionaryDatabase($dataDictionaryServer, $dataDictionaryDbName, $xmlExportFullPath) { $SqlConnection = New-Object System.Data.SqlClient.SqlConnection $SqlConnection.ConnectionString = "Server=$dataDictionaryServer;Initial catalog=$dataDictionaryDbName;Trusted_Connection=True;" try { $SqlConnection.Open() $SqlCommand = $SqlConnection.CreateCommand() $SqlCommand.CommandText = "EXEC [dbo].[FillDataDictionary] '$xmlExportFullPath'" $SqlCommand.ExecuteNonQuery() | out-null } catch { Write-Host "FillDataDictionary could not be executed`r`nException: $_" } } ################################################################################# #read and parse config.xml file from root folder [xml]$configFile = Get-Content config.xml $server = $configFile.config.Server $database = $configFile.config.Database $dataDictionaryServer = $configFile.config.DataDictionaryServer $dataDictionaryDbName = $configFile.config.DataDictionaryDatabaseName $rootFolder = $PSScriptRoot #check if ApexSQLDiff is installed $DiffInstallPath = Get-ItemPropertyValue -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\ApexSQL Diff*" -Name "InstallLocation" if(-not $DiffInstallPath) { #ApexSQL Diff installation not found. 
Please install ApexSQL Diff to continue } $diffLocation = $DiffInstallPath + "ApexSQLDiff.com" $snapshotsLocation = CheckAndCreateFolder $rootFolder -Baselines $exportLocation = CheckAndCreateFolder $rootFolder -Exports $logLocation = CheckAndCreateFolder $rootFolder -Logs $today = (Get-Date -Format "MMddyyyy") $latestSnapshot = FindSnapshotByDate $snapshotsLocation $snapshotName = "SchemaSnapshot_$today.axsnp" $logName = "SnapshotLog_$today.txt" $xml = "SchemaDifferenceExport_$today.xml" $initialCompare = "/s1:""$server"" /d1:""$database"" /s2:""$server"" /d2:""$database"" /ot:x /xeo:e is /on:""$exportLocation\$xml"" /f /v" $compareSettingsSnapshot = "/s1:""$server"" /d1:""$database"" /sn2:""$latestSnapshot"" /out:""$logLocation\$logName"" /rece /f /v" $exportSettingsSnapshot = "/s1:""$server"" /d1:""$database"" /sn2:""$snapshotsLocation\$snapshotName"" /export /f /v" $diffExportXMLparams = "/s1:""$server"" /d1:""$database"" /sn2:""$latestSnapshot"" /ot:x /xeo:d s t is /on:""$exportLocation\$xml"" /f /v" #if no previous snapshot found, create snapshot for current state and skip the rest if($latestSnapshot -eq $null) { #put initial state of current database in datadictionary (Invoke-Expression ("& `"" + $diffLocation +"`" " +$initialCompare)) PutRecordsToDataDictionaryDatabase $dataDictionaryServer $dataDictionaryDbName $exportLocation\$xml #create snapshot of current database state (Invoke-Expression ("& `"" + $diffLocation +"`" " +$exportSettingsSnapshot)) Write-Host "Snapshot is not found in the '$snapshotsLocation' folder.`r`n`r`nInitial snapshot has been automatically created and named '$snapshotName'" #here, add the comparison against empty datasource continue } #compare the database with latest snapshot (Invoke-Expression ("& ""$diffLocation"" $compareSettingsSnapshot")) $returnCode = $LASTEXITCODE #differences detected if($returnCode -eq 0) { #Export differences into XML file (Invoke-Expression ("& ""$diffLocation"" $diffExportXMLparams")) #Add timestamp on each line of log file $tsOutput | ForEach-Object { ((Get-Date -format "MM/dd/yyyy hh:mm:ss") + ": $_") >> $file } PutRecordsToDataDictionaryDatabase $dataDictionaryServer $dataDictionaryDbName $exportLocation\$xml #create snapshot of current database state (Invoke-Expression ("& `"" + $diffLocation +"`" " +$exportSettingsSnapshot)) } #there are no differences or an error occurred else { #an error occurred if($returnCode -ne 102) { Write-Host "An error occurred during the application execution.`r`nError code: $returnCode" continue } }
References:
4 ways of handling database/login credentials during automated execution via the CLI
How to automate and schedule CLI execution with SQL Server Job
What is a SQL Server Data Dictionary and why would I want to create one?
The post How to build a “smart” SQL Server Data dictionary appeared first on Solution center.
0 notes