#Create Table Values Parameter in SQL server
promptlyspeedyandroid · 10 days ago
Complete PHP Tutorial: Learn PHP from Scratch in 7 Days
Are you looking to learn backend web development and build dynamic websites with real functionality? You’re in the right place. Welcome to the Complete PHP Tutorial: Learn PHP from Scratch in 7 Days — a practical, beginner-friendly guide designed to help you master the fundamentals of PHP in just one week.
PHP, or Hypertext Preprocessor, is one of the most widely used server-side scripting languages on the web. It powers everything from small blogs to large-scale websites like Facebook and WordPress. Learning PHP opens up the door to back-end development, content management systems, and full-stack programming. Whether you're a complete beginner or have some experience with HTML/CSS, this tutorial is structured to help you learn PHP step by step with real-world examples.
Why Learn PHP?
Before diving into the tutorial, let’s understand why PHP is still relevant and worth learning in 2025:
Beginner-friendly: Easy syntax and wide support.
Open-source: Free to use with strong community support.
Cross-platform: Runs on Windows, macOS, Linux, and integrates with most servers.
Database integration: Works seamlessly with MySQL and other databases.
In-demand: Still heavily used in CMS platforms like WordPress, Joomla, and Drupal.
If you want to build contact forms, login systems, e-commerce platforms, or data-driven applications, PHP is a great place to start.
Day-by-Day Breakdown: Learn PHP from Scratch in 7 Days
Day 1: Introduction to PHP & Setup
Start by setting up your environment:
Install XAMPP or MAMP to create a local server.
Create your first .php file.
Learn how to embed PHP inside HTML.
Example:
<?php echo "Hello, PHP!"; ?>
What you’ll learn:
How PHP works on the server
Running PHP in your browser
Basic syntax and echo statement
Day 2: Variables, Data Types & Constants
Dive into PHP variables and data types:
$name = "John"; $age = 25; $is_student = true;
Key concepts:
Variable declaration and naming
Data types: String, Integer, Float, Boolean, Array
Constants and predefined variables ($_SERVER, $_GET, $_POST)
Day 3: Operators, Conditions & Control Flow
Learn how to make decisions in PHP:
if ($age >= 18) { echo "You are an adult."; } else { echo "You are underage."; }
Topics covered:
Arithmetic, comparison, and logical operators
If-else, switch-case
Nesting conditions and best practices
Day 4: Loops and Arrays
Understand loops to perform repetitive tasks:
$fruits = ["Apple", "Banana", "Cherry"]; foreach ($fruits as $fruit) { echo $fruit. "<br>"; }
Learn about:
for, while, do...while, and foreach loops
Arrays: indexed, associative, and multidimensional
Array functions (count(), array_push(), etc.)
Day 5: Functions & Form Handling
Start writing reusable code and learn how to process user input from forms:
function greet($name) { return "Hello, $name!"; }
Skills you gain:
Defining and calling functions
Passing parameters and returning values
Handling HTML form data with $_POST and $_GET
Form validation and basic security tips
Day 6: Working with Files & Sessions
Build applications that remember users and work with files:
session_start(); $_SESSION["username"] = "admin";
Topics included:
File handling (fopen, fwrite, fread, etc.)
Reading and writing text files
Sessions and cookies
Login system basics using session variables
Day 7: PHP & MySQL – Database Connectivity
On the final day, you’ll connect PHP to a database and build a mini CRUD app:
$conn = new mysqli("localhost", "root", "", "mydatabase");
Learn how to:
Connect PHP to a MySQL database
Create and execute SQL queries
Insert, read, update, and delete (CRUD operations), as shown in the sample queries below
Display database data in HTML tables
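To make the CRUD bullet above concrete, here is a minimal sketch of the SQL statements you would execute from PHP against MySQL. The users table and its columns are illustrative assumptions, not part of the tutorial's own schema:
-- Illustrative table; adjust names and types to your own schema
CREATE TABLE users (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    name  VARCHAR(100) NOT NULL,
    email VARCHAR(150) NOT NULL
);
-- Create: add a new row
INSERT INTO users (name, email) VALUES ('John', 'john@example.com');
-- Read: fetch rows to display in an HTML table
SELECT id, name, email FROM users;
-- Update: change an existing row
UPDATE users SET email = 'john.doe@example.com' WHERE id = 1;
-- Delete: remove a row
DELETE FROM users WHERE id = 1;
In PHP, each statement would be passed to $conn->query() or, for anything containing user input, run through a prepared statement.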
Bonus Tips for Mastering PHP
Practice by building mini-projects (login form, guest book, blog)
Read official documentation at php.net
Use tools like phpMyAdmin to manage databases visually
Try MVC frameworks like Laravel or CodeIgniter once you're confident with core PHP
What You’ll Be Able to Build After This PHP Tutorial
After following this 7-day PHP tutorial, you’ll be able to:
Create dynamic web pages
Handle form submissions
Work with databases
Manage sessions and users
Understand the logic behind content management systems (CMS)
This gives you the foundation to become a full-stack developer, or even specialize in backend development using PHP and MySQL.
Final Thoughts
Learning PHP doesn’t have to be difficult or time-consuming. With the Complete PHP Tutorial: Learn PHP from Scratch in 7 Days, you’re taking a focused, structured path toward web development success. You’ll learn all the core concepts through clear explanations and hands-on examples that prepare you for real-world projects.
Whether you’re a student, freelancer, or aspiring developer, PHP remains a powerful and valuable skill to add to your web development toolkit.
So open up your code editor, start typing your first <?php ... ?> block, and begin your journey to building dynamic, powerful web applications — one day at a time.
ai-cyber · 3 months ago
Understanding Your Code:
Your Python code performs a variety of tasks, including:
Quantum Circuit Simulation (Qiskit): Simulates a simple quantum circuit.
GitHub Repository Status Check: Checks if a GitHub repository is accessible.
DNS Lookup/Webpage Query Prediction: Predicts usage based on the time of day.
C Library Integration: Calls functions from a C library (Viable.so).
Octal Value Operations: Works with octal values and DNS severity levels.
Ternary Operator Usage: Demonstrates the use of ternary operators.
Cosmos Data Structure: Represents solstices, equinoxes, weeks, and days.
Machine Learning (Naive Bayes and KNN): Trains and evaluates machine learning models.
Data Visualization: Plots the results of machine learning predictions.
Integrating Your Code with PostgreSQL:
Here's how we can integrate your code with your PostgreSQL database:
Logging DNS Queries:
Modify your code to log DNS queries into the dns_query_logs table.
Whenever your AI software performs a DNS lookup, insert a new row into dns_query_logs with the query time, query type, domain name, and result.
This will provide a persistent record of your DNS activity.
Storing DNS Records:
If your AI software retrieves DNS records, store them in the dns_records table.
This will allow you to analyze and process DNS data over time.
You might need to parse the DNS response and extract the relevant information.
Storing Configuration Settings:
Move configuration settings from your code to the configurations table.
This will make it easier to manage and update settings without modifying your code.
For example, you could store API keys, DNS server addresses, and other parameters.
Storing Hierarchical Data:
If your AI software works with hierarchical data (e.g., DNS zones, network topologies), store it in the hierarchical_data table.
This will allow you to represent and query hierarchical relationships.
Storing Analysis Results:
Store the results of your AI analysis in the database.
For example, you could store:
Detected anomalies in DNS traffic
Security threats identified
Predictions made by your machine learning models
This will allow you to track and analyze your AI software's performance.
Connecting Your Code to PostgreSQL:
Use a Python database connector library (e.g., psycopg2 or asyncpg) to connect to your PostgreSQL database.
Implement functions to:
Insert data into your tables
Retrieve data from your tables
Update data in your tables
Delete data from your tables
Next Steps:
Install the psycopg2 library:
pip install psycopg2-binary
Modify your code to connect to your PostgreSQL database.
Use the psycopg2 library to establish a connection.
Create a cursor object to execute SQL queries.
Implement functions to insert data into the dns_query_logs table.
Modify your DNS query logic to insert a new row into dns_query_logs whenever a query is made.
Let's start by modifying your code to connect to your PostgreSQL database and insert data into the dns_query_logs table.
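As a starting point, here is a minimal sketch of the SQL side of that step. The table definition and column names are assumptions based on the fields mentioned earlier (query time, query type, domain name, and result); adjust them to your actual dns_query_logs schema before wiring the statements into psycopg2.
-- Assumed structure for the dns_query_logs table (illustrative only)
CREATE TABLE IF NOT EXISTS dns_query_logs (
    id          BIGSERIAL PRIMARY KEY,
    query_time  TIMESTAMPTZ NOT NULL DEFAULT now(),
    query_type  TEXT NOT NULL,        -- e.g. 'A', 'AAAA', 'MX'
    domain_name TEXT NOT NULL,
    result      TEXT
);
-- Statement to run for every DNS lookup your AI software performs
INSERT INTO dns_query_logs (query_time, query_type, domain_name, result)
VALUES (now(), 'A', 'example.com', '93.184.216.34');
From Python, you would execute the INSERT through a psycopg2 cursor using %s parameter placeholders rather than string concatenation, and then commit the connection.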
dynamicscommunity101 · 10 months ago
Mastering SSRS Reports in Dynamics 365
Within Microsoft Dynamics 365, SSRS reporting is an effective tool for creating, managing, and distributing reports. Businesses can obtain in-depth insights, make data-driven choices, and expedite reporting procedures by utilizing SSRS reports. This tutorial explores the essential elements of SSRS report creation and management in Dynamics 365, providing helpful advice and best practices to maximize your reporting potential.
What does SSRS mean in Dynamics 365?
SSRS in Dynamics 365 refers to the integration of SQL Server Reporting Services with the Dynamics 365 ecosystem to provide advanced reporting and analytics. Financial statements, sales analysis, and operational indicators are just a few of the business-critical reports that can be produced with Dynamics 365's SSRS reports. The purpose of these reports is to extract data from Dynamics 365 and deliver it in an insightful, well-organized manner.
Important Procedures for Creating and Managing Dynamics 365 SSRS Reports
Establish the environment for development
Install Required Tools: Make sure Visual Studio with the SSRS Reporting Services extensions, or SQL Server Data Tools (SSDT), is installed. These are the necessary tools for creating and distributing reports.
Set Up Data Connections: Create connections to Dynamics 365 data. To retrieve pertinent data from the Dynamics 365 system, you might need to set up data sources and datasets.
Create the Report
Create a New Report Project: To get started, create a new SSRS report project in SSDT or Visual Studio. In accordance with your reporting requirements, specify the report layout, including the tables, charts, and visual components.
Define the Data Sources and Datasets: Link views or data entities in Dynamics 365 in order to obtain the required data. Set up datasets that include the data fields needed for your report.
Design the Report Layout: Use the report designer to format the report, add graphic elements, and arrange the data fields. Make sure the design satisfies company needs and is clear and easy to use.
Put Report Parameters Into Practice
Include Parameters: Include parameters so that consumers can alter and filter the report's data. Date ranges, classifications, and other pertinent report-related criteria are examples of parameters.
Set Up Defaults: To simplify the user experience and minimize manual input, set default values for parameters.
Install and Verify the Report
Deploy the Report: After the report has been designed and configured, upload it to the Dynamics 365 system. This entails setting up the required parameters and uploading the report to the reporting server.
Test the Report: Test the report thoroughly to make sure it performs as intended. Verify the performance, correctness, and usability of the data. Respond to any problems that surface while testing.
Oversee and Uphold Reports
Update Reports: Continually update reports to take into account modifications to data structures, business requirements, or user feedback. For future reference, keep track of versions and document changes.
Monitor Performance: Keep an eye on user feedback and report performance at all times. For effective data retrieval and presentation, optimize report setups and queries.
Best Practices for SSRS Reports in Dynamics 365
Design with the user experience in mind: Make reports that are easy to read and offer pertinent information without overwhelming the user. Make sure the report is organized neatly and the data is displayed in an understandable manner.
Ensure Data Accuracy: Verify the accuracy and timeliness of the data shown in the reports. To ensure data integrity, reconcile report data with source systems on a regular basis.
Optimize Performance: To enhance report performance, make use of effective indexing and queries. When it comes to cutting down on report processing time, steer clear of huge datasets and intricate calculations.
Put Security Measures in Place: By setting up the right security settings and access controls, you can make sure that sensitive data in reports is safeguarded. Restrict report access to only those who are authorized.
Document and Version Your Reports: Maintain complete documentation for every report, including its purpose, data sources, and any customizations. Use version control to handle report updates and alterations.
Summary
In Dynamics 365, SSRS reports are essential for providing useful information and facilitating data-driven decision-making. Gaining proficiency in the creation, distribution, and administration of SSRS reports will improve your reporting skills and enable you to give stakeholders insightful data. You may produce dependable and efficient reports by following best practices, which include performance optimization, data correctness assurance, and user-friendly report design. Gaining a thorough understanding of Dynamics 365's SSRS reporting will enable you to fully utilize your data and improve business results.
advancedexcelinstitute · 10 months ago
Use of Power Query in Power BI
Power Query in Power BI is a powerful tool used for data transformation and preparation before visualizing the data. It provides an intuitive interface to connect, combine, and refine data from various sources into a coherent, structured dataset ready for analysis. Excel Training in Mumbai often covers how to use Power Query to effectively prepare and transform data. Here's an overview of how Power Query is used in Power BI:
1. Connecting to Data Sources
Importing Data: Power Query can connect to various data sources like Excel files, databases (SQL Server, Oracle, etc.), online services (Azure, SharePoint, etc.), and even web pages.
Multiple Data Sources: You can combine data from multiple sources into a single dataset, which is especially useful when dealing with complex data architectures.
2. Data Transformation
Data Shaping: Power Query allows you to shape your data by removing unnecessary columns, renaming columns, filtering rows, and sorting data.
Data Cleansing: It provides tools to clean your data by handling missing values, removing duplicates, splitting and merging columns, and correcting data types.
Merging and Appending: You can merge (join) tables based on common columns or append (union) tables to create a unified dataset.
Conditional Columns: Power Query enables creating conditional columns based on specific logic, similar to using IF statements in Excel.
3. Advanced Data Manipulation
Grouping and Aggregation: You can group data by specific columns and aggregate data (e.g., summing, averaging) to create summary tables.
Pivoting and Unpivoting: Power Query allows pivoting rows to columns and vice versa, transforming your data into a more suitable structure for analysis.
Custom Columns: Using the M language (Power Query's formula language), you can create custom columns with complex calculations and logic.
4. Data Loading
Load to Data Model: Once the data is transformed, it can be loaded into the Power BI data model, where it can be used for creating reports and visualizations.
Direct Query vs. Import Mode: Power Query supports both Direct Query (where data is queried directly from the source) and Import Mode (where data is imported into Power BI for analysis).
5. Automation and Reusability
 Query Dependencies: Power Query automatically tracks dependencies between queries, ensuring that changes in one query reflect in others that depend on it. This feature is crucial for maintaining accurate and up-to-date data models, especially in complex projects.
Reusable Steps: All transformation steps are recorded and can be modified or reused across different queries, ensuring consistency and efficiency. This capability allows users to standardize their data preparation processes and streamline workflows, which is often highlighted in Advanced Excel Classes in Mumbai to help professionals optimize their data management tasks.
6. Integration with Other Power BI Features
Parameters: You can create parameters in Power Query that allow dynamic filtering and customization of data sources and queries.
Templates: Power Query transformations can be saved as templates and reused across different Power BI reports or shared with others.
7. Data Profiling
Column Quality and Distribution: Power Query provides tools to profile your data, showing column quality, value distribution, and statistics to help identify data issues early.
Error Handling: It highlights errors and outliers, allowing you to manage and clean data before loading it into the data model.
8. Performance Considerations
Query Folding: Power Query attempts to push data transformations back to the data source (query folding) whenever possible, optimizing performance by reducing the amount of data loaded into Power BI.
Example Use Cases
Sales Data Preparation: Importing sales data from multiple regional Excel files, cleaning it, and consolidating it into a single dataset for analysis.
Web Scraping: Extracting data from a web page, transforming it into a structured format, and using it in a Power BI report.
Data Integration: Combining data from an SQL Server database and a SharePoint list, transforming it, and creating a unified data model for reporting.
Steps to Access Power Query in Power BI
Open Power BI Desktop.
Go to the "Home" tab.
Click on "Transform Data" to open the Power Query Editor.
Use the various tools and options available in the Power Query Editor to connect to data sources, transform data, and prepare it for analysis.
Power Query is essential for anyone looking to perform robust data transformation and preparation in Power BI. It ensures your data is clean, well-structured, and ready for analysis, enabling better insights and decision-making. Learning Power Query is a key part of Advanced Excel Training in Mumbai, as it equips individuals with the skills needed to handle data efficiently and create powerful data models.
For more information, contact us at:
Call: 8750676576, 871076576
Website: www.advancedexcel.net
govindhtech · 11 months ago
GCP Database Migration Service Boosts PostgreSQL migrations
GCP database migration service
GCP Database Migration Service (DMS) simplifies data migration to Google  Cloud databases for new workloads. DMS offers continuous migrations from MySQL, PostgreSQL, and SQL Server to Cloud SQL and AlloyDB for PostgreSQL. DMS migrates Oracle workloads to Cloud SQL for PostgreSQL and AlloyDB to modernise them. DMS simplifies data migration to Google Cloud databases.
This blog post will discuss ways to speed up Cloud SQL migrations for PostgreSQL / AlloyDB workloads.
Large-scale database migration challenges
The main purpose of Database Migration Service is to move databases smoothly with little downtime. With huge production workloads, migration speed is crucial to the experience. Slower migration times can affect PostgreSQL databases in several ways:
A long time for the destination to catch up with the source after replication begins.
Long-running copy operations pause vacuum, risking transaction ID wraparound on the source.
Increased WAL log size leads to increased disk usage on the source.
Boost migrations
To speed up migrations, you can fine-tune some settings to avoid the aforementioned concerns. The following options apply to Cloud SQL and AlloyDB destinations. To improve migration speeds, adjust the following settings in various categories:
DMS parallels initial load and change data capture (CDC).
Configure source and target PostgreSQL parameters.
Improve machine and network settings
Examine these in detail.
Parallel initial load and CDC with DMS
Google’s new DMS functionality uses PostgreSQL multiple subscriptions to migrate data in parallel by setting up pglogical subscriptions between the source and destination databases. This feature migrates data in parallel streams during data load and CDC.
Database Migration Service’s UI and Cloud SQL APIs default to OPTIMAL, which balances performance and source database load. You can increase migration speed by selecting MAXIMUM, which delivers the maximum dump speeds.
Based on your setting,
DMS calculates the optimal number of subscriptions (the receiving side of pglogical replication) per database based on database and instance-size information.
To balance replication set sizes among subscriptions, tables are assigned to distinct replication sets based on size.
Individual subscription connections copy data in parallel, followed by CDC.
In Google’s experience, MAXIMUM mode speeds migration multifold compared to MINIMAL / OPTIMAL mode.
The MAXIMUM setting delivers the fastest speeds, but if the source is already under load, it may slow application performance. So check source resource use before choosing this option.
Configure source and target PostgreSQL parameters.
CDC and initial load can be optimised with these database options. The suggestions have a range of values, which you must test and set based on your workload.
Target instance fine-tuning
These destination database configurations can be fine-tuned; a consolidated example follows the individual parameter notes below.
max_wal_size: Set this in range of 20GB-50GB
The system setting max_wal_size limits WAL growth between automatic checkpoints. A higher value reduces checkpoint frequency, freeing resources for the migration. The default max_wal_size can trigger checkpoints every few seconds under DMS load; to avoid this, set max_wal_size between 20GB and 50GB depending on the machine tier. Higher values improve migration speeds, especially during the initial load. AlloyDB manages checkpoints automatically, so this parameter is not needed there. After migration, adjust the value to fit production workload requirements.
pglogical.synchronous_commit : Set this to off 
As the name implies, with pglogical.synchronous_commit set to off, commits are acknowledged before WAL records are flushed to disk; the WAL flush then depends on the wal_writer_delay parameter. This is an asynchronous commit, which speeds up CDC DML modifications but reduces durability: the last few asynchronous commits may be lost if PostgreSQL crashes.
wal_buffers : Set 32–64 MB in 4 vCPU machines, 64–128 MB in 8–16 vCPU machines
wal_buffers controls the amount of shared memory used for WAL data that has not yet been written to disk. Smaller wal_buffers increase commit frequency, so increasing them helps the initial load; for larger vCPU targets the value can be set as high as 256MB.
maintenance_work_mem: Suggested value of 1GB / size of biggest index if possible 
PostgreSQL maintenance operations like VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY employ maintenance_work_mem. Databases execute these actions sequentially. Before CDC, DMS migrates initial load data and rebuilds destination indexes and constraints. Maintenance_work_mem optimises memory for constraint construction. Increase this value beyond 64 MB. Past studies with 1 GB yielded good results. If possible, this setting should be close to the destination’s greatest index to replicate. After migration, reset this parameter to the default value to avoid affecting application query processing.
max_parallel_maintenance_workers: Proportional to CPU count
Following data migration, DMS uses pg_restore to recreate secondary indexes on the destination. DMS chooses the best parallel configuration for --jobs depending on target machine configuration. Set max_parallel_maintenance_workers on the destination for parallel index creation to speed up CREATE INDEX calls. The default option is 2, although the destination instance's CPU count and memory can increase it. After migration, reset this parameter to the default value to avoid affecting application query processing.
max_parallel_workers: Set proportional max_worker_processes
The max_parallel_workers flag increases the system’s parallel worker limit. The default value is 8. Setting this above max_worker_processes has no effect because parallel workers are taken from that pool. Maximum parallel workers should be equal to or more than maximum parallel maintenance workers.
autovacuum: Off
Turn off autovacuum in the destination until replication lag is low if there is a lot of data to catch up on during the CDC phase. To speed up a one-time manual VACUUM before promoting an instance, set max_parallel_maintenance_workers=4 (set it to the Cloud SQL instance's vCPUs) and maintenance_work_mem=10GB or greater. Note that a manual VACUUM uses maintenance_work_mem. Turn autovacuum back on after migration.
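Pulling the destination-side suggestions above together, a minimal sketch on a self-managed PostgreSQL instance might look like the following (on Cloud SQL or AlloyDB these are typically applied as database flags rather than via ALTER SYSTEM). The values are mid-range picks from the guidance above, not definitive settings, and several should be reverted after migration as noted:
ALTER SYSTEM SET max_wal_size = '32GB';                  -- within the suggested 20GB-50GB range; not needed on AlloyDB
ALTER SYSTEM SET pglogical.synchronous_commit = off;     -- requires the pglogical extension used by DMS
ALTER SYSTEM SET wal_buffers = '64MB';                   -- 32-64MB for 4 vCPUs, 64-128MB for 8-16 vCPUs
ALTER SYSTEM SET maintenance_work_mem = '1GB';           -- ideally close to the size of the largest index
ALTER SYSTEM SET max_parallel_maintenance_workers = 4;   -- scale with the destination vCPU count
ALTER SYSTEM SET max_parallel_workers = 8;               -- keep >= max_parallel_maintenance_workers
ALTER SYSTEM SET autovacuum = off;                       -- turn back on once replication lag is low
SELECT pg_reload_conf();                                 -- wal_buffers only takes effect after a restart
Remember to revert maintenance_work_mem, the parallel worker settings, and autovacuum to production values after the migration, as noted above.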
Source instance configurations for fine tuning
Finally, for source instance fine tuning, consider these configurations:
Shared_buffers: Set to 60% of RAM 
The database server allocates shared memory buffers using the shared_buffers argument. Increase shared_buffers to 60% of the source PostgreSQL database‘s RAM to improve initial load performance and buffer SELECTs.
Adjust machine and network settings
Another factor in faster migrations is machine or network configuration. Larger destination and source configurations (RAM, CPU, Disc IO) speed migrations.
Here are some methods:
Consider a larger machine tier for the destination instance when migrating with DMS. After the migration, and before promoting the instance, downgrade the machine to a lower tier. This requires a machine restart, but since it is done before promoting the instance, source downtime is usually unaffected.
Network bandwidth is limited by vCPUs. The network egress cap on write throughput for each VM depends on its type. VM network egress throughput limits disc throughput to 0.48MBps per GB. Disc IOPS is 30/GB. Choose Cloud SQL instances with more vCPUs. Increase disc space for throughput and IOPS.
Google’s experiments show that private IP migrations are 20% faster than public IP migrations.
Size initial storage based on the migration workload’s throughput and IOPS, not just the source database size.
The number of vCPUs in the target Cloud SQL instance determines Index Rebuild parallel threads. (DMS creates secondary indexes and constraints after initial load but before CDC.)
Last ideas and limitations
DMS may not improve speed if the source has a huge table that holds most of the data in the database being migrated. The current parallelism is table-level due to pglogical constraints. Future updates will solve the inability to parallelise table data.
Do not activate automated backups during migration. DDLs on the source are not supported for replication, therefore avoid them.
Fine-tuning source and destination instance configurations, using optimal machine and network configurations, and monitoring workflow steps optimise DMS migrations. Faster DMS migrations are possible by following best practices and addressing potential issues.
Read more on govindhtech.com
oditek · 1 year ago
SnapLogic Tool | SnapLogic EDI | SnapLogic ETL | SnapLogic API
What is SnapLogic?
SnapLogic Integration Cloud is an innovative integration platform as a service (iPaaS) solution that offers a rapid, versatile, and contemporary approach to address real-time application and batch-oriented data integration needs. It strikes a harmonious balance between simplicity in design and robustness in platform capabilities, enabling users to quickly achieve value. The SnapLogic Designer, Manager, and Monitoring Dashboard are all part of a multi-tenant cloud service specifically designed for citizen integrators.
One of the key strengths of the SnapLogic Integration Cloud is its extensive range of pre-built connectors, known as Snaps. These intelligent connectors empower users to seamlessly connect various systems such as SaaS applications, analytics platforms, Big Data repositories, ERP systems, identity management solutions, social media platforms, online storage services, and technologies like SFTP, OAuth, and SOAP. In the rare instance where a specific Snap is not available, users have the flexibility to create custom Snaps using the Snap SDK, which is based on Java.
SnapLogic Integration Cloud is purpose-built for cloud environments, ensuring there are no legacy components that hinder its performance in the cloud. Data flows effortlessly between applications, databases, files, social networks, and big data sources leveraging the Snaplex, an execution network that is self-upgrading and elastically scalable.
What is SnapLogic Tool?
The SnapLogic Tool is a powerful software application provided by SnapLogic for streamlining integration processes on the SnapLogic Integration Cloud platform. It includes features such as SnapLogic EDI for seamless integration with EDI systems, SnapLogic ETL for efficient data extraction, transformation, and loading, SnapLogic API for creating and managing APIs, SnapLogic Support for comprehensive assistance, and SnapLogic API Management for effective API governance. The tool simplifies integration, reduces development time, and ensures secure communication between systems.
SnapLogic ETL
SnapLogic offers a powerful ETL (Extract, Transform, Load) system that enables users to efficiently load and manage bulk data in real-time, significantly reducing development time for data loading. The SnapLogic ETL system includes a pipeline automation feature designed to help enterprises load data faster and in a well-organized manner.
Through the automation pipeline, data can be seamlessly loaded from multiple sources such as SQL Server, Oracle, IBM DB2, and others, into the desired destination, such as Snowflake. This process is fully automated and eliminates the need for human intervention. The pipeline also incorporates automatic unit testing, ensuring data integrity and accuracy.
Using the SnapLogic ETL system, users can create tables in the destination automatically and perform a bulk load of data for the initial load. Subsequent loads can be done incrementally. Additionally, users have the ability to check all test logs, including schema testing for data types, constraints, and record comparison between the source and destination. These tests can be executed by passing a few required parameters to the pipeline.
The implementation of this ETL automation pipeline has yielded remarkable results, with a reduction of approximately 1400 hours of project development time. By leveraging the capabilities of SnapLogic ETL, organizations can achieve significant time savings and improved efficiency in their data loading processes.
SnapLogic EDI
Another SnapLogic Tool is SnapLogic EDI, which is a specialized component offered by SnapLogic, designed to facilitate seamless integration with Electronic Data Interchange (EDI) systems. This powerful tool provides organizations with the capability to automate and streamline the exchange of business documents with their trading partners.
With the SnapLogic EDI tool, users can leverage a user-friendly interface to configure EDI workflows and map data formats effortlessly. It offers a visual design environment where users can define mappings between their internal data structures and the specific EDI formats required by their trading partners.
The SnapLogic EDI tool enables the automation of the entire EDI process, from data transformation to document exchange. Users can define business rules and data transformations within the tool, ensuring that the data exchanged through EDI complies with the required formats and standards.
One of the key advantages of the SnapLogic EDI tool is its ability to handle various EDI standards and formats, such as ANSI X12, EDIFACT, and others. This flexibility allows organizations to seamlessly connect and exchange data with a wide range of trading partners, regardless of the specific EDI standards they use.
SnapLogic API
SnapLogic API Management is a powerful solution offered by SnapLogic that enables organizations to harness the potential of APIs for achieving digital business success. In today’s landscape, where data sprawls across hybrid and multi-cloud environments, APIs play a crucial role in connecting systems, enabling communication with partners, and delivering exceptional customer experiences.
With SnapLogic API Management, organizations gain a comprehensive set of features to effectively build, manage, and govern their APIs within a single platform. The low-code/no-code capabilities empower users to quickly and easily create APIs without the need for extensive coding knowledge. This accelerates the development process and allows organizations to rapidly expose their backend systems, as well as modern applications and services, to various environments.
Lifecycle API management is a key aspect of SnapLogic API Management. It encompasses a range of functionalities to secure, manage, version, scale, and govern APIs across the organization. Organizations can ensure that APIs are protected, control access and permissions, and enforce security policies. They can also manage the lifecycle of APIs, including versioning and scaling, to meet changing business needs.
SnapLogic API Management provides enhanced discoverability and consumption of APIs through a customizable Developer Portal. This portal serves as a centralized hub where developers and partners can explore and access available APIs. It improves collaboration, facilitates integration efforts, and promotes API reuse across the organization.
A comprehensive API Analytics Dashboard is another valuable feature of SnapLogic API Management. It allows organizations to track API performance, monitor usage patterns, and proactively identify any issues or bottlenecks. This data-driven insight enables organizations to optimize their APIs, ensure efficient operations, and deliver high-quality experiences to their API consumers.
Wrapping Up
The SnapLogic Tool offers a powerful and comprehensive solution for smooth and easy workflow integrations. With features such as SnapLogic EDI, SnapLogic ETL, SnapLogic API, and SnapLogic API Management, organizations can streamline their integration processes, automate data exchange with trading partners, perform efficient ETL operations, create and manage APIs, and ensure effective governance and scalability. With OdiTek providing the SnapLogic Tool, businesses can leverage its capabilities to achieve seamless connectivity, improved efficiency, and enhanced customer experiences through smooth workflow integrations.
Contact us today to learn more about our SnapLogic services!
godigiinfotech1 · 1 year ago
The Foundation of Web Applications - A Complete Guide to Back-End Development
Front-end developers focus on the user interface, while back-end developers manage the server-side logic and database layers that drive web applications. In this blog, we will look at the basic concepts of back-end development, important technologies, and suggested practices for aspiring full stack engineers.
Understanding of Back-End Development
Building and maintaining the server, database, and application logic are all part of back-end development. It guarantees that data is appropriately processed, accessed, and saved, providing the functionality the front end needs to interact with.
Key Back-End Technologies
Server-Side Languages:
Node.js :
Purpose - A JavaScript runtime built on Chrome's V8 engine, used for building fast and scalable server-side applications.
Key Concepts - Event-driven architecture, non-blocking I/O, Express framework.
Best Practices - Use middleware effectively, manage errors, optimize performance.
Python :
Purpose - A high-level interpreted language known for being fast to develop in and easy to read.
Key Concepts - ORM (Object-Relational Mapping), RESTful APIs, and the Flask and Django frameworks.
Best Practices - Write clean code, use virtual environments, and follow PEP 8 guidelines.
Ruby:
Purpose - Dynamic, object-oriented language designed for simplicity and productivity.
Key Concepts - Ruby on Rails framework, MVC architecture, Active Record.
Best Practices: Use gems judiciously, follow the Ruby style guide, test extensively.
Databases :
SQL Databases:
Examples - MySQL, PostgreSQL.
Key Concepts - Structured query language, relational tables, ACID properties.
Best Practices - Normalize databases, use indexes, and back up regularly (see the short example below).
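As a small illustration of the relational concepts above, the sketch below (PostgreSQL-flavoured syntax; the table and column names are examples only) creates two normalized tables linked by a foreign key and adds an index on the column queries filter by:
CREATE TABLE customers (
    id    SERIAL PRIMARY KEY,
    name  VARCHAR(100) NOT NULL,
    email VARCHAR(150) UNIQUE NOT NULL
);

CREATE TABLE orders (
    id          SERIAL PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers (id),
    total       NUMERIC(10, 2) NOT NULL,
    created_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Index the foreign key column used in joins and filters
CREATE INDEX idx_orders_customer_id ON orders (customer_id);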
NoSQL Databases:
Examples - MongoDB, CouchDB.
Key Concepts - Document stores, key-value pairs, schema flexibility.
Best Practices - Optimize for read/write performance, use appropriate data models, ensure data integrity.
Back-End Frameworks
Express.js (Node.js):
Purpose - Minimalist web framework for Node.js.
Key Concepts - Middleware, routing, request/response handling.
Best Practices - Modularize routes, use environment variables, handle errors gracefully.
Django (Python):
Purpose - High level web framework that promotes efficient development & clean, pragmatic design.
Key Concepts - ORM, URL routing, template engine.
Best Practices - Follow the Django project structure, use Django’s built-in admin, secure your application.
Ruby on Rails:
Purpose - Server-side web application framework written in Ruby.
Key Concepts - Convention over configuration, Active Record, RESTful design.
Best Practices - Adhere to Rails conventions, use strong parameters, implement caching.
APIs and RESTful Services
Purpose - APIs (Application Programming Interfaces) allow different software systems to communicate. REST is a common approach to designing APIs.
Key Concepts - HTTP methods (GET, POST, PUT, DELETE), endpoints, JSON data format.
Best Practices - Design intuitive endpoints, use proper HTTP status codes, document your API.
Authentication and Security
Authentication Methods:
Session Based - Storing user session data on the server.
Token Based - Using tokens (example JWT) to authenticate requests.
OAuth - Third-party authentication (example logging in with Google).
Security Best Practices:
Data Encryption - Use SSL/TLS for secure communication.
Access Control - Implement proper user roles and permissions.
For online applications to be secure, trustworthy, and effective, back-end development is important. You can guarantee the smooth and secure operation of your apps by becoming an expert in server-side languages, databases, frameworks, and best practices. Staying proficient in a continuously evolving field requires continuous learning and practice.
Build Your Dream Project: Start Your Full Stack Path Today
edcater · 1 year ago
Exploring Tableau: Beginner-Friendly Tutorial
Are you ready to dive into the fascinating world of data visualization with Tableau? Whether you're a data enthusiast, a student, or a professional looking to enhance your analytical skills, Tableau offers a user-friendly platform to create stunning visualizations and gain valuable insights from your data. In this beginner-friendly tutorial, we'll walk through the basics of Tableau, step by step, so you can start harnessing its power with confidence.
1. Introduction to Tableau
Let's start with the basics. Tableau is a powerful data visualization tool that allows users to create interactive and shareable dashboards, reports, and charts. It's widely used across various industries for data analysis, business intelligence, and decision-making.
2. Getting Started with Tableau
To begin your Tableau journey, you'll need to download and install Tableau Desktop, the main application for creating visualizations. Once installed, launch Tableau Desktop and you're ready to go.
3. Connecting to Data Sources
Tableau allows you to connect to a wide range of data sources including Excel files, databases like SQL Server and MySQL, cloud platforms like Google BigQuery, and more. To import your data into Tableau, simply click on the "Connect" option and choose your desired data source.
4. Understanding Tableau Workspace
Tableau's workspace consists of various components such as data pane, shelves, cards, and toolbar. The data pane displays the data tables and fields from your connected data source, while shelves are used to build visualizations by dragging and dropping fields onto rows, columns, or marks cards.
5. Creating Basic Visualizations
Now it's time to create your first visualization in Tableau. Start by dragging a field from the data pane onto the rows or columns shelf to create a basic chart such as a bar chart or line graph. You can further customize your visualization by adding filters, sorting data, and formatting the appearance.
6. Building Interactive Dashboards
One of Tableau's standout features is its ability to create interactive dashboards that allow users to explore data dynamically. To build a dashboard, simply drag visualizations onto the dashboard canvas and arrange them as desired. You can then add interactivity by creating filters, parameters, and actions.
7. Using Calculated Fields and Parameters
Tableau offers advanced functionality through calculated fields and parameters, allowing users to perform calculations and create dynamic controls within their visualizations. Calculated fields enable you to create new fields based on existing data, while parameters allow users to input values dynamically and control aspects of the visualization.
8. Exploring Advanced Visualization Techniques
As you become more comfortable with Tableau, you can explore advanced visualization techniques to create more complex and insightful visualizations. This includes techniques such as dual-axis charts, trend lines, geographic maps, and advanced analytics using Tableau's built-in functions and features.
9. Sharing and Collaborating
Once you've created your visualizations and dashboards, it's time to share your insights with others. Tableau offers various options for sharing and collaborating including publishing to Tableau Server or Tableau Online, embedding visualizations in websites or presentations, and exporting as image or PDF files.
Conclusion
Congratulations! You've completed our beginner-friendly Tableau tutorial and are now equipped with the basic knowledge to start exploring and creating visualizations with Tableau. Remember, practice makes perfect, so don't hesitate to experiment with different features and techniques to unleash the full potential of Tableau for your data analysis needs. Happy visualizing!
absalomcarlisle1 · 4 years ago
Absalom Carlisle - DATA ANALYST
Absalom Carlisle is a customer-focused leader in operations, data analytics, project management and business development. Drives process improvements to contain costs, increase productivity and grow revenue through data analysis using Python, SQL and Excel. Creates strategies and allocates resources through competitive analysis and business intelligence insights with visualizations using Tableau and Power BI. Excellent presentation, analytical, communication and problem-solving skills. Develops strong relationships with stakeholders to mitigate issues and to foster change. Nashville Software School will enhance and help me acquire new skills from a competitive program with unparalleled instruction. Working on individual and group projects using real data sets from local companies is invaluable. The agile remote-working environment has, and will continue to, solidify my expertise as I prepare my journey to join the data analytics career path.
Technical Skills
· DATA ANALYSIS · SQL SERVER · POSTGRES SQL · EXCEL/PIVOT TABLES
· PYTHON/JUPYTER NOTEBOOKS · TABLEAU/TABLEAU-PREP · POWER BI
· SSRS/SSIS · GITBASH/GITHUB · KANBAN
DATA ANALYST EXPERIENCE
Querying Databases with SQL
Indexing and Query Tuning                                                                                    
Report Design W/Data Sets and Aggregates                                  
Sub-Reports-Parameters and Filters
Data Visualization W/Tableau and Power-BI
 Report Deployment                                                              
Metadata Repository                                                                
Data Warehousing-Delivery Process                                      
Data Warehouse Schemas
Star Schemas-Snowflakes Schemas                                  
PROFESIONAL EXPERIENCE
Quantrell Auto Group
Director of Operations | 2016- 2020
·         Fostered strong partnerships with business leaders, senior business managers, and business vendors.
·         Analyzed business vendor performances using Excel data with Tableau to create reports and dashboards for insights that helped implement vendor specific plans, garnering monthly savings of $25K.
·         Managed and worked with high profile Contractors and architecture firms that delivered 3 new $7M construction building projects for Subaru, Volvo and Cadillac on time and under budget.
·         Led energy savings initiative that updated HVAC systems, installed LED lighting though-out campus, introduced and managed remote controlled meters - reducing monthly costs from $38K to $18K and gaining $34K in energy rebate from the utility company- as a result, the company received Green Dealer Award recognition nationally.
·         Collected, tracked and organized data to evaluate current business and market trends using Tableau.
·         Conducted in-depth research of vehicle segments and presented to Sr. management recommendations to improve accuracy of residual values forecasts by 25%.
·         Identified inefficiencies in equipment values forecasts and recommended improved policies.
·         Manipulated residual values segment data and rankings using pivot tables, pivot charts.
·         Created routine and ad-hoc reports for internal and for external customer’s requests.
·         Provided project budgeting and cost estimation for proposal submission.
·         Established weekly short-term vehicle forecast based on historical data sets, enabling better anticipation capacity.
·         Selected by management to head the operational integration of Avaya Telecommunication system, Cisco Meraki Cloud network system and the Printer install project.
·         Scheduled and completed 14 Cisco Meraki inspections to 16 buildings, contributing 99% network up-time.
·         Following design plans, installed and configured 112 workstations and Cisco Meraki Switches, fulfilling 100% user needs.
Clayton Healthcare Services
Founder | 2009 - 2015
·         Successfully managed home healthcare business from zero to six-figure annual revenues. Drove growth through strategic planning, budgeting, and business development.
·         Built a competent team from scratch as a startup company.
·         Built strategic marketing and business development plans.
·         Built and managed basic finance, bookkeeping, and accounting functions using excel.
·         Processed, audited and maintained daily, monthly payable-related activities, including data entry of payables and related processing, self-auditing of work product, reviews and processing of employee’s reimbursements, and policy/procedure compliance.
·         Increased market share through innovative marketing strategies and excellent customer service.
JP Morgan Chase
Portfolio Analyst 2006-2009
·         Researched potential equity, fixed income, and alternative investments for high net-worth individuals and institutional clients.
·         Analyzed quarterly performance data to identify trends in operations using Alteryx and Excel.
·         SME in providing recommendations for Equity Solutions programs to enable portfolio managers to buy securities at their own discretion.
·         Created ad-hoc reports to facilitate executive-level decision making
·         Maintained and monitored offered operational support for key performance indicators and trends dashboards
EDUCATION & TRAINING
Bachelor of Science in Managerial Economics | 2011 | Washington University, St. Louis, MO
Project Management Certification | 2014 | St. Louis University, St. Louis, MO
Microsoft BI Full Stack Certification
Data Science/Analytics | Jan 2021 | Nashville Software School, Nashville, TN
micenhat · 4 years ago
Learn SQL: SQL Triggers
In this article, we’ll focus on DML (data manipulation language) triggers and show how they function when we make changes in a single table.
What Are SQL Triggers?
In SQL Server, triggers are database objects (actually a special kind of stored procedure) that "react" to certain actions we make in the database. The main idea behind triggers is that they always perform an action in case some event happens. If we're talking about DML triggers, these changes shall be changes in our data. Let's examine a few interesting situations:
In case you perform an insert in the call table, you want to record that the related customer has one more call (in that case, we should have an integer attribute in the customer table)
When you complete a call (update call.end_time attribute value) you want to increase the counter of calls performed by that employee during that day (again, we should have such attribute in the employee table)
When you try to delete an employee, you want to check if it has related calls. If so, you’ll prevent that delete and raise a custom exception
From examples, you can notice that DML triggers are actions related to the SQL commands defined in these triggers. Since they are similar to stored procedures, you can test values using the IF statement, etc. This provides a lot of flexibility.
The good reason to use DML SQL triggers is the case when you want to assure that a certain control shall be performed before or after the defined statement on the defined table. This could be the case when your code is all over the place, e.g. database is used by different applications, code is written directly in applications and you don’t have it well-documented.
Types of SQL Triggers
In SQL Server, we have 3 groups of triggers:
DML (data manipulation language) triggers – We’ve already mentioned them, and they react to DML commands. These are – INSERT, UPDATE, and DELETE
DDL (data definition language) triggers – As expected, triggers of this type shall react to DDL commands like – CREATE, ALTER, and DROP
Logon triggers – The name says it all. This type reacts to LOGON events
In this article, we’ll focus on DML triggers, because they are most commonly used. We’ll cover the remaining two trigger types in the upcoming articles of this series.
DML Triggers – Syntax
The simplified SQL syntax to define the trigger is as follows.
CREATE TRIGGER [schema_name.]trigger_name
ON table_name
{FOR | AFTER | INSTEAD OF} {[INSERT] [,] [UPDATE] [,] [DELETE]}
AS
{sql_statements}
Most of the syntax should be self-explanatory. The main idea is to define:
A set of {sql_statements} that shall be performed when the trigger is fired (defined by remaining parameters)
We must define when the trigger is fired. That is what the part {FOR | AFTER | INSTEAD OF} does. If our trigger is defined as a FOR | AFTER trigger, then the SQL statements in the trigger shall run only after all the actions that fired the trigger have completed successfully. The INSTEAD OF trigger shall perform controls and replace the original action with the action in the trigger, while the FOR | AFTER (they mean the same) trigger shall run additional commands after the original statement has completed
The part {[INSERT] [,] [UPDATE] [,] [DELETE]} denotes which command actually fires this trigger. We must specify at least one option, but we could use multiple if we need it
With this in mind, we can easily write triggers that will:
Check (before insert) if all parameters of the INSERT statement are OK, add some if needed, and perform the insert
After insert, perform additional tasks, like updating a value in another table
Before delete, check if there are related records
Update certain values (e.g. log file) after the delete is done
If you want to drop a trigger, you’ll use:
DROP TRIGGER [schema_name.]trigger_name;
SQL INSERT Trigger – Example
First, we’ll create a simple SQL trigger that shall perform check before the INSERT statement.
DROP TRIGGER IF EXISTS t_country_insert;
GO

CREATE TRIGGER t_country_insert ON country INSTEAD OF INSERT
AS BEGIN
    DECLARE @country_name CHAR(128);
    DECLARE @country_name_eng CHAR(128);
    DECLARE @country_code CHAR(8);
    SELECT @country_name = country_name, @country_name_eng = country_name_eng, @country_code = country_code FROM INSERTED;
    IF @country_name IS NULL SET @country_name = @country_name_eng;
    IF @country_name_eng IS NULL SET @country_name_eng = @country_name;
    INSERT INTO country (country_name, country_name_eng, country_code) VALUES (@country_name, @country_name_eng, @country_code);
END;
We can see our trigger in the Object Explorer, when we expand the data for the related table (country).
I want to emphasize a few things here:
The INSERT statement fires this trigger and is actually replaced (INSTEAD OF INSERT) with the statements in the trigger
We've defined a number of local variables to store values from the original insert record (INSERTED). This record is specific to triggers and it allows you to access this single record and its values
Note: The INSERTED record can be used in the insert and update SQL triggers.
With IF statements, we’ve tested values and SET values if they were not set before
At the end of the query, we performed the INSERT statement (the one replacing the original one that fired this trigger)
Let’s now run an INSERT INTO command and see what happens in the database. We’ll run the following statements:
SELECT * FROM country;
INSERT INTO country (country_name_eng, country_code) VALUES ('United Kingdom', 'UK');
SELECT * FROM country;
The result is in the picture below.
You can easily notice that the row with id = 10 has been inserted. We haven't specified the country_name, but the trigger did its job and filled that value with country_name_eng.
Note: If the trigger is defined on a certain table, for a certain action, it shall always run when this action is performed.
SQL DELETE Trigger – Example
Now let’s create a trigger that shall fire upon the DELETE statement on the country table.
DROP TRIGGER IF EXISTS t_country_delete;
GO

CREATE TRIGGER t_country_delete ON country INSTEAD OF DELETE
AS BEGIN
    DECLARE @id INT;
    DECLARE @count INT;
    SELECT @id = id FROM DELETED;
    SELECT @count = COUNT(*) FROM city WHERE country_id = @id;
    IF @count = 0
        DELETE FROM country WHERE id = @id;
    ELSE
        THROW 51000, 'can not delete - country is referenced in other tables', 1;
END;
For this trigger, it’s worth to emphasize the following:
Once again, we perform the action before (instead of) actual executing (INSTEAD OF DELETE)
We’ve used record DELETED. This record can be used in the triggers related to the DELETE statement
Note: The DELETED record can be used in delete and update SQL triggers.
We’ve used the IF statement to determine if the row should or shouldn’t be deleted. If it should, we’ve performed the DELETE statement, and if shouldn’t, we’re thrown and exception
Running the below statement went without an error because the country with id = 6 had no related records.
DELETE FROM country WHERE id = 6;
If we run this statement we’ll see a custom error message, as shown in the picture below.
DELETE FROM country WHERE id = 1;
Such a message is not only descriptive, but allows us to treat this error nicely and show a more meaningful message to the end-user.
SQL UPDATE Trigger
I will leave this one to you, as a practice. So try to write down the UPDATE trigger. The important thing you should know is that in the update trigger you can use both – INSERTED (after update) and DELETED (before update) records. In almost all cases, you’ll need to use both of them.
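As a hint rather than a full solution, here is a minimal sketch of an AFTER UPDATE trigger that reads both records; it only prints the old and new country names, and, like the examples above, it assumes a single-row update.
DROP TRIGGER IF EXISTS t_country_update;
GO

CREATE TRIGGER t_country_update ON country AFTER UPDATE
AS BEGIN
    DECLARE @old_name CHAR(128);
    DECLARE @new_name CHAR(128);
    SELECT @old_name = country_name FROM DELETED;   -- value before the update
    SELECT @new_name = country_name FROM INSERTED;  -- value after the update
    PRINT 'country_name changed from ' + RTRIM(@old_name) + ' to ' + RTRIM(@new_name);
END;
A production trigger would join INSERTED and DELETED on the primary key so that multi-row updates are handled correctly.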
When to Use SQL Triggers?
Triggers share a lot in common with stored procedures. Still, compared to stored procedures they are limited in what you can do. Therefore, I prefer to have one stored procedure for insert/update/delete and make all checks and additional actions there.
Still, that is not always an option. If you inherited a system, or you simply don't want to put all the logic in stored procedures, then triggers could be a solution for many of the problems you might face.
sagar-jaybhay · 5 years ago
Text
Cursor In RDBMS By Sagar Jaybhay 2020
In this article, we will understand cursors in an RDBMS; in our case the examples are shown on SQL Server. We will also cover the MERGE statement in SQL Server, re-runnable SQL scripts, and how to create a stored procedure with an optional parameter.
Cursors In RDBMS
If we take a relational database management system into consideration, it normally processes data in sets, which is an efficient way of working.
But when you need to process data on a row-by-row basis, the cursor is the tool of choice. Cursors perform poorly, though, and should be avoided where possible; in many cases a cursor can be replaced with a set-based join.
Different Types of Cursors In RDBMS
There are four types of cursors in an RDBMS, which are listed below:
Forward only
Static
Keyset
Dynamic
A cursor loops through each record one by one, which is why its performance is not good.
declare @empid int
declare @deptid int
declare @fullname varchar(200)

declare empcurose cursor for
    select EmpID, full_name, DepartmentID from Employee

open empcurose
fetch next from empcurose into @empid, @fullname, @deptid
while (@@FETCH_STATUS = 0)
begin
    print 'EmpID ' + cast(@empid as varchar(10)) + ' Name ' + cast(@fullname as varchar(100)) + ' deptid ' + cast(@deptid as varchar(100))
    fetch next from empcurose into @empid, @fullname, @deptid
end
close empcurose
deallocate empcurose
deallocate empcurose
This line is used to deallocate all resources which are allocated for that cursor.
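As mentioned above, a cursor like this can often be replaced with a single set-based statement. Here is a minimal sketch of the set-based equivalent, assuming the same Employee table and columns used in the cursor example:

-- Set-based equivalent of the cursor loop: one statement, no row-by-row processing
select 'EmpID ' + cast(EmpID as varchar(10))
     + ' Name ' + cast(full_name as varchar(100))
     + ' deptid ' + cast(DepartmentID as varchar(100)) as employee_info
from Employee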
What is rerunnable SQL scripts?
A re-runnable SQL script is a script that can be run multiple times on the same machine without throwing any kind of error.
For example, if you use a CREATE TABLE statement to create a table, wrap it in an existence check so that running the script a second time does not throw an error; a short sketch follows below.
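Here is a minimal sketch of such a check. The table name demo_table is hypothetical; the OBJECT_ID test is what makes the CREATE TABLE safe to run repeatedly:

-- Re-runnable: only create the table if it does not already exist
IF OBJECT_ID('dbo.demo_table', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.demo_table
    (
        id INT PRIMARY KEY,
        name VARCHAR(100)
    );
END;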
How to create a stored procedure with an optional parameter?
create procedure searchemployee
    @name varchar(10) = null,
    @deptid int = null,
    @gender varchar(10) = null
as
begin
    -- begin/end blocks ensure the SELECT and RETURN only run when the matching parameter is supplied
    if (@name is not null)
    begin
        print 'i am in name ' + cast(@name as varchar(20))
        select * from tblEmp where [name] = @name;
        return;
    end

    if (@deptid is not null)
    begin
        print 'i am in deptid ' + cast(@deptid as varchar(20))
        select * from tblEmp where deptid = @deptid;
        return;
    end

    if (@gender is not null)
    begin
        print 'i am in gender ' + cast(@gender as varchar(20))
        select * from tblEmp where geneder = @gender;
        return;
    end

    print 'i m here ' + cast(@gender as varchar(20)) + ' ' + cast(@deptid as varchar(20)) + ' ' + cast(@name as varchar(20))
    select * from tblEmp
end

execute searchemployee @deptid = 2
To make a parameter optional, simply assign a default value (here NULL) to the stored procedure parameter.
Merge statement In SQL server
The MERGE statement was introduced in SQL Server 2008. It allows you to perform inserts, updates, and deletes in one statement, so there is no need to write separate statements for INSERT, UPDATE, and DELETE.
To use the MERGE statement, you need two tables:
Source table – contains the changes that need to be applied to the target table.
Target table – the table that receives the changes (insert, update, delete).
The MERGE statement joins the target table to the source table using a common column present in both tables; based on how the rows match up, we perform an insert, update, or delete. A minimal sketch is shown below.
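Here is a minimal sketch of a MERGE statement. The table and column names (EmployeeSource, EmployeeTarget, id, name) are hypothetical, chosen only to illustrate the matched/not-matched branches:

MERGE EmployeeTarget AS t
USING EmployeeSource AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.name = s.name               -- row exists in both tables: update the target
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, name) VALUES (s.id, s.name)  -- row only in the source: insert into the target
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                                  -- row only in the target: delete it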
Transaction Link: https://www.codementor.io/@sagarjaybhay18091988/transaction-in-sql-server-155l4qr7f4
blogsarmistha-blog · 8 years ago
Text
C# - Using Table Valued Parameter
In this article, we will learn:
What is Table Valued Parameter?
How to pass Table Valued Parameter from C#?
Advantages of using Table Valued Parameter?
  What is Table Valued Parameter?
Table Valued Parameters are used to pass multiple rows of data from a .NET/client application to SQL Server without multiple round trips. We can pass multiple rows of a table to a stored procedure in a single call.
How to create…
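The original post is truncated at this point, but on the SQL Server side a Table Valued Parameter generally involves a user-defined table type and a stored procedure with a READONLY parameter of that type. Here is a minimal sketch with hypothetical names (EmployeeTableType, usp_InsertEmployees, tblEmployee):

-- 1. A user-defined table type describing the rows to be passed in
CREATE TYPE EmployeeTableType AS TABLE
(
    id   INT,
    name VARCHAR(100)
);
GO

-- 2. A stored procedure that accepts the table type (TVP parameters must be READONLY)
CREATE PROCEDURE usp_InsertEmployees
    @employees EmployeeTableType READONLY
AS
BEGIN
    INSERT INTO tblEmployee (id, name)
    SELECT id, name FROM @employees;
END;

From C#, such a procedure is typically called by passing a DataTable through a SqlParameter whose SqlDbType is Structured and whose TypeName matches the table type.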
paintbrushe · 5 years ago
Text
Stored Procedures with SQL
Welcome to Pragim Technologies, I am Venkat. This is part 18 of the SQL Server series. In this session we'll understand what a stored procedure is, look at a simple stored procedure example, create a stored procedure with parameters, alter a stored procedure, view the text of a stored procedure, and finally see how to drop a stored procedure. A stored procedure is a group of Transact-SQL statements. If you ever have a situation where you have to write the same query over and over again, you can save that specific query as a stored procedure and call it just by its name. Let's understand what we mean by this with an example. I have a table called tblEmployee, which has ID, name, gender and department ID columns.
Let's say I want the name and gender of an employee, so we type SELECT Name, Gender FROM tblEmployee. Every time I want the name and gender of an employee, I have to write this query. Instead of that, we can wrap the query inside a stored procedure and call that stored procedure rather than writing the query again and again. So how do we create a stored procedure? We use the CREATE PROCEDURE command, and then we have to give the procedure a name. Let's call it spGetEmployees, since this procedure gets us the employee name and gender. Notice the letters 'sp' in the name: a common naming convention for stored procedures is to prefix them with lowercase 'sp', indicating that just by looking at the name you can tell it is a stored procedure.
So we write CREATE PROCEDURE, the procedure name, and then AS BEGIN ... END. The definition of your stored procedure goes between BEGIN and END; this is the body of the stored procedure. When I execute this command, a stored procedure with this name gets created in the current database, which is the Sample database we have selected here. Now, if you want to verify that the procedure was actually created, go into that database, expand the Programmability folder, and you should see a folder called Stored Procedures.
If you expand that folder and the procedure is not listed yet, just refresh it, and you should see the stored procedure we have just created, spGetEmployees. So any time you want the name and gender of an employee, instead of writing the query again, you can execute the stored procedure. To execute it, you just need the name of the procedure: highlight it, click Execute, and you get the name and gender without having to write the query any more. Now you might be wondering: it's a very simple query, why not just write it rather than creating a procedure and then invoking it? This procedure may be simple, but in reality procedures can be long.
There are stored procedures with over three thousand lines, for example, and not only that, there are several other benefits of using stored procedures, from security to reduced network traffic, etc. We will talk about the advantages of stored procedures in great detail in a later session. So we use the CREATE PROCEDURE statement to create a stored procedure; you can either write CREATE PROCEDURE in full or use CREATE PROC as a shortcut. We will talk about the naming convention for stored procedures in just a bit, and we have seen how to execute a stored procedure.
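For reference, here is a minimal sketch of the procedure being described; the table and column names follow the transcript, so treat the exact identifiers as assumptions:

-- Wrap the frequently used query in a stored procedure
CREATE PROCEDURE spGetEmployees
AS
BEGIN
    SELECT Name, Gender FROM tblEmployee
END
GO

-- Execute it by name instead of rewriting the query
EXEC spGetEmployees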
You just copy the name of the stored procedure and click the Execute button; the SQL statements within the stored procedure get executed and it returns the name and gender columns. That's one way to execute it. You can also use the EXEC keyword, or the full EXECUTE keyword, and press F5, or execute the stored procedure graphically: right-click on the stored procedure and select Execute Stored Procedure, and a window appears. This procedure doesn't have any parameters; otherwise you would have to supply values for the parameters. In just a bit we will see how to create a stored procedure that takes parameters. When I click OK, it executes the stored procedure.
So those are the different ways to execute a stored procedure. Now let's look at a simple example of how to create a stored procedure with parameters. Let's go back to the tblEmployee table. What I want to do is create a stored procedure which takes two parameters: gender and department ID. For example, if I pass gender as 'Male' and department ID as 1 to the stored procedure, it should give me only the employees within that gender and that department. So the stored procedure needs to have these parameters. As usual, to create a stored procedure we use the CREATE PROCEDURE command and give the procedure a meaningful name, say spGetEmployeesByGenderAndDepartment, because I want the employees by gender and department.
The user who invokes your stored procedure is going to pass it the gender and the department ID, so for them to be able to pass those values there need to be parameters. Just as functions have parameters in C# or any other programming language, stored procedures can also have parameters. One is the gender parameter; gender is text here, so its data type is going to be NVARCHAR(20). Department ID is going to be an integer. Then comes AS BEGIN ... END, and the definition of the stored procedure goes between those lines.
So what do we want from the table? We want the name, the gender and the department ID from the tblEmployee table, but we don't want all names, genders and department IDs; we want to filter them by what the user passes in for gender and department ID. So we say the gender column should be equal to whatever the user passes in for @Gender, and along the same lines the department ID column should be equal to whatever the user passes in for @DepartmentId. These parameters are like placeholders: when users execute your stored procedure they pass in values for gender and department ID, which are substituted at execution time. Let's create the stored procedure: select the entire statement and click the Execute button, and the command completes successfully. If you refresh the Stored Procedures folder, you should see spGetEmployeesByGenderAndDepartment. Now, to execute the stored procedure I just need its name, but this stored procedure is expecting the gender and department ID parameters. If I don't pass the parameters and try to execute it, I get an error saying that the procedure or function spGetEmployeesByGenderAndDepartment expects parameter @Gender, which was not supplied, and that makes sense: it's expecting a gender parameter which was not supplied. So we need to pass the gender parameter; since gender is of type NVARCHAR I have to use single quotes. I want the male employees within department ID 1, so gender is 'Male' and department ID is 1. These are the two parameters the stored procedure expects, and when I press F5 I get only the male employees within department ID 1. If I want all the male employees in department ID 2, I can do that as well. Now, when you pass parameters like this, just as values, the 'Male' value is taken into the @Gender parameter, whereas the number 1 is passed into the @DepartmentId parameter. What happens if I put them the other way around and pass 1 first? It will take the 1 into @Gender; 1 is an integer but @Gender is of type NVARCHAR, so it is converted implicitly with no problem. But when it comes to the second argument, 'Male', it tries to take it into the @DepartmentId parameter, whose data type is integer, so it tries to convert the string into an integer, fails, and throws an exception saying there was an error converting the varchar data type to int. So when a stored procedure expects multiple parameters and you're passing just values, the order in which you pass them is important: the first argument is used for the first parameter and the second argument is used for the second parameter.
That's why the order is important. But you can also use the parameter names: if I want to pass 1 to @DepartmentId, I can specify the name of that parameter explicitly, and similarly I can specify the name of the @Gender parameter.
When I execute this now, I have no issues, because by specifying the parameter names SQL Server knows that 1 is meant to be the value for the @DepartmentId parameter and 'Male' is the value for the @Gender parameter. It's only when you don't specify the parameter names that the order in which you pass them matters. So we have seen how to create a simple stored procedure, how to create a procedure with parameters, and how to execute them. Now, let's say I have created two procedures so far: spGetEmployees and spGetEmployeesByGenderAndDepartment.
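A minimal sketch of the parameterized procedure and the two execution styles described above (identifiers follow the transcript and should be treated as assumptions):

CREATE PROCEDURE spGetEmployeesByGenderAndDepartment
    @Gender NVARCHAR(20),
    @DepartmentId INT
AS
BEGIN
    SELECT Name, Gender, DepartmentId
    FROM tblEmployee
    WHERE Gender = @Gender AND DepartmentId = @DepartmentId
END
GO

-- Positional arguments: the order matters
EXEC spGetEmployeesByGenderAndDepartment 'Male', 1

-- Named arguments: the order no longer matters
EXEC spGetEmployeesByGenderAndDepartment @DepartmentId = 1, @Gender = 'Male'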
Now, if I want to view the text of these two procedures, what are the different ways available? One way is to simply right-click on the stored procedure and choose Script Stored Procedure As > CREATE To > New Query Editor Window; this generates the contents of the stored procedure: the definition we created, with CREATE PROCEDURE, the procedure name, AS BEGIN ... END, and then our query. That's one way to look at the definition of a stored procedure; the other way is to use a system stored procedure. The stored procedures we have created here are user-defined stored procedures.
These are not system stored procedures. SQL Server has a number of system stored procedures defined, and we use them for certain tasks. For example, if I want to find the text of a stored procedure, I can use a system stored procedure called sp_helptext. If I pass it the name of the stored procedure, spGetEmployees, select both together and execute, I get the text of my stored procedure. You can then copy and paste it to see what the implementation looks like. So, to view the definition of a stored procedure, you can either right-click on it and use Script Stored Procedure As > CREATE To > New Query Editor Window, or use the system stored procedure sp_helptext followed by the name of the stored procedure, which gives you the text of the stored procedure.
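For example, using the procedure created earlier in this walkthrough:

-- Returns the text (definition) of the stored procedure
EXEC sp_helptext 'spGetEmployees'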
Now, when naming user-defined stored procedures, Microsoft recommends not to use the sp_ prefix, because system stored procedures use that prefix. If you happen to use the sp_ prefix for your user-defined stored procedures, there are two problems. Number one: there will be ambiguity between user-defined and system stored procedures, because just by looking at the name we cannot tell whether it is a user-defined or a system stored procedure. The other problem is that with future releases of SQL Server there may be name conflicts: if you create, say, an sp_GetDate stored procedure and a future release introduces a similarly named system stored procedure, it is going to conflict with your user stored procedure. To avoid problems like this, it is better not to prefix user-defined stored procedures with sp_. So, how do we change a stored procedure?
Once we have created a stored procedure, for example spGetEmployees, let's say I want to change its implementation in some way. How do I do that? Say that at the moment, when I execute spGetEmployees, the names are not sorted, and I want them sorted. I will use the ORDER BY clause, ORDER BY Name, so I am changing the implementation of the procedure. If I execute this CREATE PROCEDURE once again, we get an error stating that there is already an object named spGetEmployees: we already have it and we are trying to create it again with the same name, so obviously we get that error. Our intention here is to change the definition of the stored procedure, not to create another one. So to change the definition of a stored procedure, you say ALTER PROCEDURE and press F5.
The stored procedure gets changed, and if we execute it now the names are sorted. So we use the ALTER PROCEDURE statement to change the definition of a stored procedure. To delete a stored procedure, we use DROP PROCEDURE followed by the procedure name, just as you would use DROP TABLE with a table name to drop a table. For example, to drop spGetEmployees I just pass its name and press F5, and if I refresh the Stored Procedures folder it's gone. Alternatively, you can right-click on it and select Delete.
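A minimal sketch of the ALTER and DROP steps just described (same names as before, treated as assumptions):

-- Change the definition: same body, now with sorting
ALTER PROCEDURE spGetEmployees
AS
BEGIN
    SELECT Name, Gender FROM tblEmployee ORDER BY Name
END
GO

-- Remove the procedure entirely
DROP PROCEDURE spGetEmployees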
It is also possible to encrypt the text of a stored procedure, and it is very simple to do. Take, for example, the stored procedure spGetEmployeesByGenderAndDepartment. It is not encrypted at the moment, so when I use sp_helptext and press F5 I am able to get the text of that stored procedure and see how it is implemented. Now, if I want to encrypt the text of the stored procedure, all I have to do is use the WITH ENCRYPTION option. We want to alter the existing procedure, so I say ALTER, and when I press F5 the command completes successfully. The moment I refresh the folder, we get a little lock symbol, indicating that this stored procedure is now encrypted.
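A minimal sketch of the encryption step (the body shown is an assumption; the WITH ENCRYPTION clause is the part being demonstrated):

-- After this, sp_helptext and the scripting options can no longer show the procedure's text
ALTER PROCEDURE spGetEmployeesByGenderAndDepartment
    @Gender NVARCHAR(20),
    @DepartmentId INT
WITH ENCRYPTION
AS
BEGIN
    SELECT Name, Gender, DepartmentId
    FROM tblEmployee
    WHERE Gender = @Gender AND DepartmentId = @DepartmentId
END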
Now, if somebody tries to get the text of the encrypted stored procedure, we get a message saying that the text for the object is encrypted and cannot be retrieved. You get the same kind of message when you use Script Stored Procedure As > CREATE To > New Query Editor Window: an error box says the text is encrypted. So once a stored procedure is encrypted, you cannot view its text. However, you can still delete it: right-click and select Delete and it gets deleted; you just cannot view the contents of an encrypted stored procedure. In this session we have seen how to create a stored procedure with input parameters; in the next session we will talk about creating stored procedures with output parameters. You can also find resources for ASP.NET and C# interview questions. That's it for today. Thank you for listening, and have a great day.
supersecure-blog · 6 years ago
Text
Something awesome - web research
In order to better understand web exploitation as a concept, I need to first gain a better understanding of how networks are structured, and how information is sent over the internet.
You can read the notes I’ve compiled below:
How does the Internet work?
Modern life would be very different without computer networks. Computer networks are generally made up of multiple computers that are all connected together to share data and resources. The computer network that we all know is The Internet, which specifically connects computers that use the Internet Protocol or ‘IP’. 
This is what a basic computer network looks like:
In our diagram, we have two things labelled "end system", where one is the client and one is the server. These are all called 'nodes'. The nodes are connected by links that run through the ISP (Internet Service Provider) and the router. You can imagine the router as a traffic signaller: it has only one job, which is to make sure that a message sent from a given computer arrives at the right destination computer.
Website Basics:
Information on the Internet is divided into different areas by websites. Websites are referred to by a ‘domain name’ (like google.com, facebook.com), and each web page is referred to by its URL or Uniform Resource Locator.  A website is a collection of web pages - so a website would be like a house and each webpage would be a room inside the house. 
A URL can be broken down into different sections. Some of these sections are essential, and some others are only optional. Let’s go through each one, and discuss what each section does using the following example URL:
http://www.example.com:80/path/to/myfile.html?key1=value1&key2=value2
Protocol:
This is the http:// part. A protocol is basically a set method for sending data around a computer network. Usually for websites it is the HTTP protocol or its secured version, HTTPS.
Domain Name
Something that you should be familiar with, this domain name is a way for humans to easily remember websites that they want to visit, rather than remembering an IP address.
Port
It indicates the technical "gate" used to access the resources on the web server. It is usually omitted if the web server uses the standard ports of the HTTP protocol (80 for HTTP and 443 for HTTPS) to grant access to its resources. Otherwise it is mandatory.
Path to File
/path/to/myfile.html is the path to the resource on the Web server. In the early days of the Web, a path like this represented a physical file location on the Web server. 
Parameters
?key1=value1&key2=value2 are extra parameters provided to the Web server. Those parameters are a list of key/value pairs separated with the & symbol. 
The Web server can use those parameters to do extra stuff before returning the resource. 
Each Web server has its own rules regarding parameters, and the only reliable way to know if a specific Web server is handling parameters is by asking the Web server owner.
Have a full read here
Different parts of a website and how to mess with it:
The building blocks of websites are HTML, CSS and Javascript which are all different programming languages with their own set of rules that you have to learn. If we think of a website like a fancy birthday cake then:
HTML is the base of the cake - it’s the main body and content of the website
CSS is the icing and decorations on top of the cake - it makes the cake look pretty and distinguishes the cake from other similar cakes
JavaScript is the candles and sparklers - in terms of a website, JavaScript lets you make dynamic and interactive web pages
Like we said before, HTML is the base of your cake. HTML describes the structure of a web page and consists of a series of elements which are represented by things called tags. HTML elements basically tell the browser how to display the content.
HTML:
Tags look something like this:
<tagname>content goes here...</tagname>
There are some basic tags:
 <!DOCTYPE html> declaration defines this document to be HTML5
<html> element is the root element of an HTML page
 <head> element contains meta information about the document
<title> element specifies a title for the document
<body> element contains the visible page content
<h1> element defines a large heading
<p> element defines a paragraph
You can find the full HTML breakdown here
CSS:
For the sake of web-exploitation, you don’t need to know much about CSS. Here is a basic tutorial for those who want to learn how to make their websites look pretty!
Javascript:
One of the reasons why JavaScript is used is that it allows us to add interactivity between the user and the website: JavaScript allows the user to interact with the website and have the website respond.
By right-clicking on a website in Google Chrome or Firefox you can select the option "Inspect" to see the code that the website is running on your computer. It allows you to see the HTML and CSS that the website uses, and it will also let you see the JavaScript scripts running on your computer. The best part is that you can edit the HTML directly and see it affect the website, so it lets you modify the website as you desire. You can also select "Inspect Element" to see the code that is running in a specific part of a website.
What is HTTP?
HTTP provides a standardised way for computers to communicate with each other over the internet. It is a communication protocol used to deliver data (HTML files, image files, query results, etc.), and it dictates how data is sent between clients (you) and servers.
GET and POST requests:
GET is used to request data from a specified resource.
GET is one of the most common HTTP methods.
POST is used to send data to a server to create/update a resource.
Full link: https://www.w3schools.com/tags/ref_httpmethods.asp 
Cookies:
HTTP cookies, also called web cookies or browser cookies, are basically small bits of data that servers send to a user's web browser. The browser can store a cookie, and may also send it back when it next requests information from the same server. Normally cookies are used to tell if two requests came from the same browser. For example, cookies can help users stay logged in to websites. Cookies have three main purposes:
Session management - logins, shopping carts, game scores and any other information that the server should remember about the user
Personalisation - user preferences, themes and other settings
Tracking - recording and analysing user behaviour
 How to perform a basic SQL injection:
SQL is a language that is used to basically fetch information from databases in websites. These databases can contain information like usernames and passwords for accounts for that website. If the code that is written isn’t secured, we can perform what’s called an SQL injection to gain access to data that we normally wouldn’t have access to.
<?php
$username = $_GET['username'];
$result = mysql_query("SELECT * FROM users WHERE username='$username'");
?>
If we look at '$username', this variable is where the username for a login attempt would be stored. Normally the username would be something like 'user123', but a malicious user might submit a different kind of data. For example, consider what happens if the input is just a single quote (').
The application would crash because the resulting SQL query is incorrect.
SELECT * FROM users WHERE username='''
Note the extra, unmatched quote at the end. Knowing that a single quote will cause an error, we can expand a little more on SQL injection.
What if our input was ' OR 1=1?
SELECT * FROM users WHERE username='' OR 1=1
1 is indeed equal to 1, which equates to true in SQL. If we reinterpret this the SQL statement is really saying
SELECT * FROM users WHERE username='' OR true
This will return every row in the table, because the condition is true for every row. Using this, we can easily gain access to information that we aren't supposed to see!
faithtitta · 3 years ago
Text
File spy metadata
For Databricks, we can use a public library to achieve this. This is the core function for complex transformation: maintain the source and destination information, like the simple solution mentioned before, and create a table to record the order of the activities. I am still thinking about this and collecting information, but I think the metadata repository is the key.
So the only problem that remains is how to build a metadata designer and repository.
Logging Repository – Azure log analytics.
If we look into this architecture, there is already some similar technology in Azure:

Figure 3: the concept from the article "build a metadata-driven etl platform by extending microsoft sql server integration services"

Though it is very old, the content interests me. To get a more advanced solution, I found an article from Microsoft in 2008.

Figure 2: the table structure for the simple solution

Without some transformation (business logic or validation), it is not a standard process, and this simple solution can only handle Extract and Load. In this solution, they retrieve the data source and destination, combined with some parameters, in the copy activity through a Lookup-Todo activity, then use a ForEach to execute the staging of the data. I checked the documentation library for Data Factory, and there is one simple solution:

Figure 1: the key activities for the simple EL solution

So, I was wondering if there is any solution to fix these problems once and for all.
Like software development, data pipeline development also faces the same problems, e.g. duplicate activities, too many pipelines, hard-coding that reduces flexibility, etc. Data Factory provides a more integrated solution, while Databricks gives a more flexible one. Azure provides Data Factory and Azure Databricks for handling ELT pipelines in a scalable environment.
Once you have verified that the metadata works in the Test Environment, you can reply back in the SSO/Shibboleth Request form to say you have completed your verification and to move to Production.Azure provides datafactory and azure databricks for handling with ELT pipeline on a scalable environment.
To have your metadata installed in Test, complete the SSO/Shibboleth Service Registration Request.
Tumblr media
1 note · View note
greyshyper · 3 years ago
Text
SQL Server OPTION RECOMPILE
Use of a temporary table within an SP also causes recompilation of that statement. Since we have auto-update statistics off, we have less recompilation of SPs to begin with. This plan reuse system works as long as objects are qualified with their owner and database name. If all four tables have changed by about 20% since the last statistics update, then the entire SP is recompiled. If, say, you are accessing four tables in that SP and roughly 20% of the data for one table has changed since the last statistics update, then only that statement is recompiled. Recompilation happens only when about 20% of the data in the tables referenced within the SP is found to have changed since the last time statistics were updated for those tables and their indexes. So the use of different input parameters doesn't cause a recompilation: new input parameters in the current execution simply replace the previous input parameters in the execution context handle, which is part of the overall execution plan.
#SQL SERVER OPTION RECOMPILE SERIAL#
The new plan is discarded immediately after execution of the statement. Assuming neither of these options is being used, an execution of an SP prompts a search for pre-existing plans (one serial plan and one parallel plan) in memory (the plan cache). Any pre-existing plan, even if it is exactly the same as the new plan, is not used. When used with a T-SQL statement, whether inside an SP or ad hoc, Option 2 above creates a new execution plan for that particular statement.
#SQL SERVER OPTION RECOMPILE CODE#
When used in the code of a particular Stored procedure, Option 1compiles that SP everytime it is executed by any user. We should use RECOMPILE option only when the cost of generating a new execution plan is much less then the performance improvement which we got by using RECOMPILE option.WITH RECOMPILE This is because of the WITH RECOMPILE option, here each execution of stored procedure generates a new execution plan. Here you see the better execution plan and great improvement in Statistics IO. Now execute this stored procedure as: set statistics IO on Now again creating that stored procedure with RECOMPILE option. Here when we execute stored procedure again it uses the same execution plan with clustered index which is stored in procedure cache, while we know that if it uses non clustered index to retrieve the data here then performance will be fast. Now executing the same procedure with different parameter value: set statistics IO on The output of this execution generates below mention statistics and Execution plan: Select address,name from xtdetails where execute this stored procedure as: set statistics IO on Now create stored procedure as shown below: create procedure as varchar(50)) Set into xtdetails table xtdetails contains 10000 rows, where only 10 rows having name = asheesh and address=Moradabad. Now, I am inserting the data into this table: declare as int Ĭreate clustered index IX_xtdetails_id on xtdetails(id)Ĭreate Nonclustered index IX_xtdetails_address on xtdetails(address) In this case if we reuse the same plan for different values of parameters then performance may degrade.įor Example, create a table xtdetails and create indexes on them and insert some data as shown below: CREATE TABLE. But sometimes plans generation depends on parameter values of stored procedures. If plan found in cache then it reuse that plan that means we save our CPU cycles to generate a new plan. If we again execute the same procedure then before creating a new execution plan sql server search that plan in procedure cache. When we execute stored procedure then sql server create an execution plan for that procedure and stored that plan in procedure cache. Here i am focusing on why we use WITH RECOMPILE option. Some time, we also use WITH RECOMPILE option in stored procedures. We use stored procedures in sql server to get the benefit of reusability. Today here, I am explaining the Use of Recompile Clause in SQL Server Stored Procedures.
Tumblr media