#SQL Server decommissioning
A DBA's Checklist for Decommissioning a SQL Server
As a DBA, one of the more stressful tasks that occasionally lands on your plate is having to decommission a SQL Server. Maybe the hardware is being retired, or databases are being consolidated onto fewer servers. Whatever the reason, it’s a delicate process that requires careful planning and execution to avoid data loss or business disruption. I still remember the first time I had to…
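As a taste of what such a checklist involves, here is a minimal sketch of the kind of pre-decommission checks a DBA might run before retiring an instance. It assumes access to the standard SQL Server DMVs and msdb backup history, and is not the checklist from the full post:

-- Who has connected recently? Capture this over several business days before pulling the plug.
SELECT login_name, host_name, program_name, MAX(login_time) AS last_login
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY login_name, host_name, program_name
ORDER BY last_login DESC;

-- Which databases still have a recent backup history? A long-stale entry often flags a dormant database.
SELECT d.name AS database_name, MAX(b.backup_finish_date) AS last_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
GROUP BY d.name
ORDER BY last_backup;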

Check The Azure VMware Solution’s Most Recent Features

AVS Azure VMware Solution
With Azure VMware Rapid Migration Plan, you can save a ton of money while moving or expanding VMware environments to Azure quickly and seamlessly. You can also access 200+ Azure services.
Get a quick migration for workloads running on VMware.
Choose a migration path that combines the best of Azure and VMware on fully managed infrastructure for ease of use.
Extend or migrate to the cloud faster without re-platforming or reworking applications.
Keep using your existing skills and workflows: VMware vSphere, vSAN, NSX, and HCX are all part of the Azure VMware Solution.
Windows and SQL Server customers can receive unrivaled cost savings with free Extended Security Updates.
Address a range of use cases, such as capex challenges, cyberthreats, licensing issues, datacenter exits, and capacity requirements.
Benefits
Consider switching to the Azure VMware Solution
Migrate on your own terms
Either transfer cloud-ready workloads to Azure infrastructure as a service (IaaS) or migrate everything exactly as is to Azure VMware Solution.
Cost efficiency
By switching to Azure VMware Solution, you can avoid overprovisioning, hardware updates, and decommissioning infrastructure expenditures.
Consistency
Boost IT efficiency by making use of the VMware resources and expertise that already exist.
Value addition
Give IT personnel more time to work on value-adding projects rather than maintaining on-premises software and datacenters.
Reliability
With VMware technology that is completely managed and maintained by Microsoft, you can achieve business continuity, reduced downtime, and fewer disruptions.
Innovation
Move faster with access to native Azure services and tools on a highly productive cloud platform.
It gives me great pleasure to present some of the most recent changes Azure has made to the Azure VMware Solution.
Azure VMware Solution is now available in 33 Azure regions, more than any other cloud provider. Since its introduction four years ago, we have been striving to support customers worldwide through geographic expansion. India Central, UAE North, Italy North, and Switzerland North were the most recent additions. See which region is closest to you by visiting the Azure products by region webpage.
Azure VMware Solution has now been authorized as a service under the DoD SRG Impact Level 4 Provisional Authorization (PA) in Azure Government: AVS is currently offered in Azure Government in Arizona and Virginia.
Increased compatibility with VMware Cloud Foundation (VCF): Customers of NetApp and VMware by Broadcom may now use NetApp ONTAP software for all storage needs, including consolidated and standard architectures, to streamline their VCF hybrid cloud platforms. To give NetApp storage systems running VMware workloads symmetric active-active data replication capabilities, the most recent version of ONTAP Tools for VMware (OTV) will offer SnapMirror active sync. By offloading data protection from virtualized compute and enhancing data availability, SnapMirror active sync enables users to work more productively.
New features for Azure VMware Solution: Customers can now use Spot Eco by NetApp with AVS reserved instances to maximize the value of their deployments when expanding or relocating their vSphere virtual machines (VMs). Compute expenses can be greatly reduced by offloading data storage to Azure NetApp Files and managing AVS reserved instances with Spot Eco. Find out more about Azure NetApp Files.
Use JetStream with Azure VMware Solution to Improve Disaster Recovery and Ransomware Protection: Azure’s customers require comprehensive choices to protect their essential workloads without sacrificing application performance. Disaster Recovery (DR) and ransomware protection are major concerns for enterprises today. AVS provides cutting-edge disaster recovery (DR) solutions with near-zero Recovery Point Objective (RPO) and instant Recovery Time Objective (RTO) through partnerships with top technology firms like JetStream. By continuously replicating data, the JetStream DR and Ransomware solution delivers Continuous Data Protection (CDP).
Using affordable and high-performance storage choices like Azure Blob Storage, Azure NetApp Files (ANF), and ESAN-based solutions, it uses heuristic algorithms to detect data tampering through VMware-certified VAIO APIs. Azure's strategy is distinct from other products on the market that guard against ransomware by taking immutable snapshots. JetStream and Microsoft have collaborated to create a special feature that rehydrates virtual machines (VMs) and their associated data from object storage, enabling them to be deployed to AVS nodes that are provisioned on demand, either with or without a pilot light cluster. In the case of a disaster or ransomware attack, this guarantees a quick, affordable recovery with little downtime.
The VMware Rapid Migration Plan is a comprehensive suite of licensing benefits and programs that Azure recently introduced. It locks in your pricing and helps you save money when you migrate to Azure VMware Solution. You can use Reserved Instances to lock in pricing for one, three, or five years, and shift that value to other types of compute if your needs change. With Azure Migrate and Modernize services, you can minimize migration costs. You can also receive special Azure credits for purchasing Azure VMware Solution. Additional savings on SQL Server and Windows Server licenses might be available to you.
Read more on govindhtech.com
#AzureVMwareSolution #MostRecentFeatures #VMwareRapidMigrationPlan #SQLServer #AzureNetApp #cyberthreats #VMware #Microsoft #cloudprovider #DisasterRecovery #dr #ransomwareattack #azure #sql #technology #technews #news #govindhtech
How to Decommission a Database in SQL Server
madesimplemssql.blogspot.com/2023/06/Decomm…
Join our MS SQL Telegram Group to get the latest updates:
t.me/+pEXpwt0qlH40Y…
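As general context, one common decommissioning pattern is to take the database offline for a quarantine period before finally dropping it. A minimal T-SQL sketch, using a hypothetical database name (SalesArchive), might look like this:

-- Kick out remaining connections, then take the database offline for a quarantine period.
ALTER DATABASE SalesArchive SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesArchive SET OFFLINE;

-- Only after the agreed retention window, and once final backups are verified:
-- DROP DATABASE SalesArchive;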
What is Cloud Data Management?
The rise of multi-cloud, data-first architecture and the broad portfolio of advanced data-driven applications that have arrived as a result require cloud data management systems to collect, manage, govern and build pipelines for enterprise data. Cloud data management architectures span private, multi-cloud and hybrid cloud environments connecting to data sources not just from transaction systems, but from file servers, the Internet or multi-cloud repositories.
The scope of cloud data management includes enterprise data lake, enterprise archiving, enterprise content services, and consumer data privacy solutions. These solutions manage the utility, risk and compliance challenges of storing large amounts of data.
Cloud data platforms
Cloud data platforms are the centrepiece of cloud data management programs and provide uniform data collection and data storage at the lowest cost. Archives, data lakes, and content services enable cloud migration projects to connect, ingest, and manage any type of data from any source. For instance, cloud data platforms collect legacy and real-time data from mainframes, ERP, CRM, file stores, relational and non-relational databases, and even SaaS environments like Salesforce or Workday.
Enterprise Archiving
Studies have shown that data is accessed less frequently as it ages. Current data, such as online data, is accessed most frequently, but after two years most enterprise data is hardly ever accessed. As data growth accelerates, the load on production infrastructure grows, and the challenge of maintaining application performance increases.
Application portfolios should be screened regularly for legacy applications that are no longer in use, and those applications should be retired or decommissioned. In addition, historical data from production databases should be archived to improve performance, optimize infrastructure and reduce overall costs. Information Lifecycle Management (ILM) should be used to establish data governance and compliance controls.
Enterprise archiving supports all enterprise data including databases, streaming data, file servers and email. Using ILM, enterprise archiving moves less frequently accessed data from production systems to nearline repositories. The archive data remains highly accessible and is stored in low cost buckets. Large organizations operating silos of file servers across departments and divisions use enterprise archiving to consolidate these silos into a unified and compliant cloud repository.
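At the database level, the archiving step described above often boils down to moving aged rows out of hot tables on a schedule. A minimal sketch, assuming hypothetical orders and orders_archive tables and a two-year retention rule (T-SQL syntax shown; other engines differ):

-- Copy rows older than the retention window into the archive table...
INSERT INTO orders_archive
SELECT * FROM orders
WHERE order_date < DATEADD(year, -2, GETDATE());

-- ...then remove them from the production table to keep it lean.
DELETE FROM orders
WHERE order_date < DATEADD(year, -2, GETDATE());

In practice an ILM or archiving tool would drive this in batches, with the archive table living on cheaper storage.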
Enterprise Data Lake
Data-driven enterprises leverage vast and complex networks of data and services, and enterprise data lakes deliver the connections necessary to move data from any source to any target location. Enterprise data lakes handle very large volumes of data and scale horizontally using commodity cloud infrastructure to deliver data pipeline and data preparation services for downstream applications such as SQL data warehouse, artificial intelligence (AI) and machine learning (ML).
Data pipelines are a series of data flows where the output of one element is the input of the next one, and so on. Data lakes serve as the collection and access points in a data pipeline and are responsible for data organization and access control.
Data preparation makes data fit for use with improved data quality. Data preparation services include data profiling, data cleansing, data enrichment, data transformation and data modeling. As an open source and industry standard solution, enterprise data lakes safely and securely collect and store large amounts of data for cloud migration, and provide enterprise grade services to explore, manage, govern, prepare and provide access control to the data.
Enterprise Content Services (ECS)
Corporate file shares are overflowing with files and long ago abandoned data. Enterprise Content Services collect and store historical enterprise data that would otherwise be spread out across various islands of storage, on personal devices, file shares, Google Drive, Dropbox, or personal OneDrives. Organizations planning cloud data migration to tackle content sprawl should consider ECS for secure and compliant file storage at the lowest cost. Cloud data migration with ECS consolidates enterprise data onto a single platform and unifies silos of file servers in innovative ways to become more efficient and reduce costs.
Consumer Data Privacy
Consumer data privacy regulations are proliferating with nearly 100 countries now adopting regulations. The California Consumer Privacy Act (CCPA) and Europe's General Data Protection Regulation (GDPR) are perhaps the best known laws, but new regulations are on the rise everywhere as security breaches, cyberattacks and unauthorized releases of personal information continue to grow unabated. These new regulations mandate strict controls over the handling of personally identifiable information (PII), yet variations across geographies make legal compliance a complex requirement.
Information Lifecycle Management (ILM) manages data throughout its lifecycle and establishes a system of controls and business rules including data retention policies and legal holds. Security and privacy tools like data classification, data masking and sensitive data discovery help data administrators achieve compliance with data governance policies such as NIST 800-53, PCI, HIPAA, and GDPR. Consumer data privacy and data governance are not only essential for legal compliance, they improve data quality as well.
What’s The Urgency?
Exponential data growth is a known fact; however, enterprises have only felt its implications in the last couple of years. On one hand, more and more data is required to support data-driven applications and analytics. On the other hand, data growth results in operational inefficiencies, technical debt and increased compliance risk. Data growth is a double-edged sword: left unmanaged it creates cost and risk, but managed well it delivers great value by enabling enterprises to use their data more effectively.
How Mr. Manasranjan Murlidhar Rana Helped Union Bank Switzerland as a Certified Hadoop Administrator
Mr. Manasranjan Murlidhar Rana is a certified Hadoop Administrator and an IT professional with 10 years of experience. During his entire career, he has contributed a lot to Hadoop administration for different organizations, including the famous Union Bank of Switzerland.
Mr. Rana’s Knowledge in Hadoop Architecture and its Components
Mr. Manasranjan Murlidhar Rana has vast knowledge and understanding of various aspects of the Hadoop architecture and its different components: MapReduce, YARN, HDFS, HBase, Pig, Flume, Hive, and Zookeeper. He also has experience building and maintaining multiple Hadoop clusters, such as production and development clusters of diverse sizes and configurations.

He has also contributed to establishing rack topology for large Hadoop clusters. In this blog post, we will discuss in detail the contribution of Manasranjan Murlidhar Rana as a Hadoop Administrator to the various operations of the Union Bank of Switzerland.
Role of Mr. Rana in Union Bank of Switzerland
From 2016 until now, Mr. Manasranjan Murlidhar Rana has played the role of Hadoop Administrator, alongside 10 other team members, for his client, the Union Bank of Switzerland (UBS). Over roughly four years, he has worked to improve data management processes for UBS.
1. Works for the Set up of Hadoop Cluster
Manasranjan Murlidhar Rana and his team were involved in the setup of the Hadoop cluster at UBS from beginning to end. The team installed, configured, and monitored the complete Hadoop cluster. Here, a Hadoop cluster refers to a computation cluster designed to store and analyze unstructured data in a well-distributed computational environment.
2. Handles 4 Different Clusters and Uses Ambari Server
Mr. Manasranjan Murlidhar Rana is responsible for handling four different clusters of the software development process: DEV, UAT, QA, and Prod. He and his team used the Ambari server extensively to maintain the different Hadoop clusters and their components. The Ambari server collects data from a cluster and thereby controls each host.
3. Cluster Maintenance and Review of Hadoop Log Files
Mr. Manasranjan Murlidhar Rana and his team have done a great deal to maintain the Hadoop cluster, including commissioning and decommissioning data nodes. He also monitored the various software development clusters, troubleshot and managed the available data backups, and reviewed Hadoop log files. Reviewing and managing Hadoop log files is an important part of Hadoop administration, used to communicate, troubleshoot, and escalate issues so the team can move in the right direction.
4. Successful Installation of Hadoop Components and its Ecosystem
The Hadoop ecosystem consists of the Hadoop daemons. A Hadoop daemon is a process that runs in the background, and there are five of them: NameNode, DataNode, JobTracker, TaskTracker, and Secondary NameNode.
Hadoop also has other components, such as Flume, Sqoop and HDFS, each with a specific function. Installing, configuring, and maintaining each of the Hadoop daemons and ecosystem components is not easy.
Drawing on his hands-on experience, Mr. Manasranjan Rana guided his team in installing Hadoop ecosystem components such as HBase, Flume, Sqoop, and many more. In particular, he used Sqoop to import and export data between relational databases and HDFS, and Flume to load log data directly into HDFS.
5. Monitor the Hadoop Deployment and Other Related Procedures
Drawing on his vast knowledge of and expertise with Hadoop, Mr. Manasranjan Murlidhar Rana monitored systems and services, worked on the architectural design and implementation of the Hadoop deployment, and ensured related procedures such as disaster recovery, data backup, and configuration management were in place.
6. Used Cloudera Manager and App Dynamics
With his hands-on experience of AppDynamics, Mr. Manasranjan Murlidhar Rana monitored the multiple clusters and environments available under Hadoop. He also checked job performance, workload, and capacity planning with the help of Cloudera Manager. Along with this, he worked on a variety of systems engineering tasks to plan and deploy new Hadoop environments, and successfully expanded the existing Hadoop cluster.
7. Setting Up MySQL Replication and Maintaining MySQL Databases
Beyond his expertise in various aspects of big data, especially the Hadoop ecosystem and its components, Mr. Manasranjan Murlidhar Rana has a good command of different types of databases, such as Oracle, MS Access, and MySQL.
In his work he maintained MySQL databases, created users, and handled database backup and recovery. He was also responsible for setting up master-slave replication for MySQL and helped business applications maintain data across multiple MySQL servers.
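As an illustration, here is a minimal sketch of a classic MySQL master-slave setup of the kind described above. The replication user, password, host range, master address, and binary log coordinates are all hypothetical; the real coordinates would come from SHOW MASTER STATUS on the actual master:

-- On the master: create a dedicated replication account
CREATE USER 'repl'@'10.0.0.%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%';

-- On the slave: point it at the master's current binary log coordinates
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.10',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- Confirm Slave_IO_Running and Slave_SQL_Running are both 'Yes'
SHOW SLAVE STATUS\G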
Therefore, with good knowledge of the Ambari server, the Hadoop components and daemons, and the wider Hadoop ecosystem, Mr. Manasranjan Murlidhar Rana has contributed to the smart management of data for the Union Bank of Switzerland.

Find Mr. Manasranjan Murlidhar Rana on Social Media. Here are some social media profiles:-
https://giphy.com/channel/manasranjanmurlidharrana https://myspace.com/manasranjanmurlidharrana https://mix.com/manasranjanmurlidhar https://www.meetup.com/members/315532262/ https://www.goodreads.com/user/show/121165799-manasranjan-murlidhar https://disqus.com/by/manasranjanmurlidharrana/
Bryan Strauch is an Information Technology specialist in Morrisville, NC
Resume: Bryan Strauch
[email protected] 919.820.0552(cell)
Skills Summary
VMWare: vCenter/vSphere, ESXi, Site Recovery Manager (disaster recovery), Update Manager (patching), vRealize, vCenter Operations Manager, auto deploy, security hardening, install, configure, operate, monitor, optimize multiple enterprise virtualization environments
Compute: Cisco UCS and other major bladecenter brands - design, rack, configure, operate, upgrade, patch, secure multiple enterprise compute environments.
Storage: EMC, Dell, Hitachi, NetApp, and other major brands - connect, zone, configure, present, monitor, optimize, patch, secure, migrate multiple enterprise storage environments.
Windows/Linux: Windows Server 2003-2016, templates, install, configure, maintain, optimize, troubleshoot, security harden, monitor, all varieties of Windows Server related issues in large enterprise environments. RedHat Enterprise Linux and Ubuntu Operating Systems including heavy command line administration and scripting.
Networking: Layer 2/3 support (routing/switching), installation/maintenance of new network and SAN switches, including zoning SAN, VLAN, copper/fiber work, and other related tasks around core data center networking
Scripting/Programming: SQL, Powershell, PowerCLI, Perl, Bash/Korne shell scripting
Training/Documentation: Technical documentation, Visio diagramming, cut/punch sheets, implementation documentations, training documentations, and on site customer training of new deployments
Security: Alienvault, SIEM, penetration testing, reporting, auditing, mitigation, deployments
Disaster Recovery: Hot/warm/cold DR sites, SAN/NAS/vmware replication, recovery, testing
Other: Best practice health checks, future proofing, performance analysis/optimizations
Professional Work History
Senior Systems/Network Engineer; Security Engineer
September 2017 - Present
d-wise technologies
Morrisville, NC
Sole security engineer - designed, deployed, maintained, operated security SIEM and penetration testing, auditing, and mitigation reports, Alienvault, etc
Responsibility for all the systems that comprise the organization's infrastructure and hosted environments
Main point of contact for all high-level technical requests for both corporate and hosted environments
Implement/maintain disaster recovery (DR) & business continuity plans
Management of network backbone including router, firewall, switch configuration, etc
Managing virtual environments (hosted servers, virtual machines and resources)
Internal and external storage management (cloud, iSCSI, NAS)
Create and support policies and procedures in line with best practices
Server/Network security management
Senior Storage and Virtualization Engineer; Datacenter Implementations Engineer; Data Analyst; Software Solutions Developer
October 2014 - September 2017
OSCEdge / Open SAN Consulting (Contractor)
US Army, US Navy, US Air Force installations across the United States (Multiple Locations)
Contract - Hurlburt Field, US Air Force:
Designed, racked, implemented, and configured new Cisco UCS blade center solution
Connected and zoned new NetApp storage solution to blades through old and new fabric switches
Implemented new network and SAN fabric switches
Network: Nexus C5672 switches
SAN Fabric: MDS9148S
Decommissioned old blade center environment, decommissioned old network and storage switches, decommissioned old SAN solution
Integrated new blades into VMWare environment and migrated entire virtual environment
Assessed and mitigated best practice concerns across entire environment
Upgraded entire environment (firmware and software versions)
Security hardened entire environment to Department of Defense STIG standards and security reporting
Created Visio diagrams and documentation for existing and new infrastructure pieces
Trained on site operational staff on new/existing equipment
Cable management and labeling of all new and existing solutions
Implemented VMWare auto deploy for rapid deployment of new VMWare hosts
Contract - NavAir, US Navy:
Upgraded and expanded an existing Cisco UCS environment
Cable management and labeling of all new and existing solutions
Created Visio diagrams and documentation for existing and new infrastructure pieces
Full health check of entire environment (blades, VMWare, storage, network)
Upgraded entire environment (firmware and software versions)
Assessed and mitigated best practice concerns across entire environment
Trained on site operational staff on new/existing equipment
Contract - Fort Bragg NEC, US Army:
Designed and implemented a virtualization solution for the US ARMY.
This technology refresh is designed to support the US ARMY's data center consolidation effort, by virtualizing and migrating hundreds of servers.
Designed, racked, implemented, and configured new Cisco UCS blade center solution
Implemented SAN fabric switches
SAN Fabric: Brocade Fabric Switches
Connected and zoned new EMC storage solution to blades
Specific technologies chosen for this solution include: VMware vSphere 5 for all server virtualization, Cisco UCS as the compute platform and EMC VNX for storage.
Decommissioned old SAN solution (HP)
Integrated new blades into VMWare environment and migrated entire environment
Physical to Virtual (P2V) conversions and migrations
Migration from legacy server hardware into virtual environment
Disaster Recovery solution implemented as a remote hot site.
VMware SRM and EMC Recoverpoint have been deployed to support this effort.
The enterprise backup solution is EMC Data Domain and Symantec NetBackup
Assessed and mitigated best practice concerns across entire environment
Upgraded entire environment (firmware and software versions)
Security hardened entire environment to Department of Defense STIG standards and security reporting
Created Visio diagrams and documentation for existing and new infrastructure pieces
Trained on site operational staff on new equipment
Cable management and labeling of all new solutions
Contract - 7th Signal Command, US Army:
Visited 71 different army bases collecting and analyzing compute, network, storage, metadata.
The data collected, analyzed, and reported will assist the US Army in determining the best solutions for data archiving and right sizing hardware for the primary and backup data centers.
Dynamically respond to business needs by developing and executing software solutions to solve mission reportable requirements on several business intelligence fronts
Design, architect, author, implement in house, patch, maintain, document, and support complex dynamic data analytics engine (T-SQL) to input, parse, and deliver reportable metrics from data collected as defined by mission requirements
From scratch in house BI engine development, 5000+ SQL lines (T-SQL)
Design, architect, author, implement to field, patch, maintain, document, and support large scale software tools for environmental data extraction to meet mission requirements
Large focus of data extraction tool creation in PowerShell (Windows, Active Directory) and PowerCLI (VMWare)
From scratch in house BI extraction tool development, 2000+ PowerShell/PowerCLI lines
Custom software development to extract data from other systems including storage systems (SANs), as required
Perl, awk, sed, and other languages/OSs, as required by operational environment
Amazon AWS Cloud (GovCloud), IBM SoftLayer Cloud, VMWare services, MS SQL engines
Full range of Microsoft Business Intelligence Tools used: SQL Server Analytics, Reporting, and Integration Services (SSAS, SSRS, SSIS)
Visual Studio operation, integration, and software design for functional reporting to SSRS frontend
Contract - US Army Reserves, US Army:
Operated and maintained Hitachi storage environment, to include:
Hitachi Universal Storage (HUS-VM enterprise)
Hitachi AMS 2xxx (modular)
Hitachi storage virtualization
Hitachi tuning manager, dynamic tiering manager, dynamic pool manager, storage navigator, storage navigator modular, command suite
EMC Data Domains
Storage and Virtualization Engineer, Engineering Team
February 2012 – October 2014
Network Enterprise Center, Fort Bragg, NC
NCI Information Systems, Inc. (Contractor)
Systems Engineer directly responsible for the design, engineering, maintenance, optimization, and automation of multiple VMWare virtual system infrastructures on Cisco/HP blades and EMC storage products.
Provide support, integration, operation, and maintenance of various system management products, services and capabilities on both the unclassified and classified network
Coordinate with major commands, vendors, and consultants for critical support required at installation level to include trouble tickets, conference calls, request for information, etc
Ensure compliance with Army Regulations, Policies and Best Business Practices (BBP) and industry standards / best practices
Technical documentation and Visio diagramming
Products Supported:
EMC VNX 7500, VNX 5500, and VNXe 3000 Series
EMC FAST VP technology in Unisphere
Cisco 51xx Blade Servers
Cisco 6120 Fabric Interconnects
EMC RecoverPoint
VMWare 5.x enterprise
VMWare Site Recovery Manager 5.x
VMWare Update Manager 5.x
VMWare vMA, vCops, and PowerCLI scripting/automation
HP Bladesystem c7000 Series
Windows Server 2003, 2008, 2012
Red Hat Enterprise and Ubuntu Server
Harnett County Schools, Lillington, NC
Sr. Network/Systems Administrator, August 2008 – June 2011
Systems Administrator, September 2005 – August 2008
Top tier technical contact for a 20,000 student, 2,500 staff, 12,000 device environment
District / network / datacenter level design, implementation, and maintenance of physical and virtual servers, routers, switches, and network appliances
Administered around 50 physical and virtual servers, including Netware 5.x/6.x, Netware OES, Windows Server 2000, 2003, 2008, Ubuntu/Linux, SUSE, and Apple OSX 10.4-10.6
Installed, configured, maintained, and monitored around 175 HP Procurve switches/routers
Maintained web and database/SQL servers (Apache, Tomcat, IIS and MSSQL, MySQL)
Monitored all network resources (servers, switches, routers, key workstations) using various monitoring applications (Solarwinds, Nagios, Cacti) to ensure 100% availability/efficiency
Administered workstation group policies and user accounts via directory services
Deployed and managed applications at the network/server level
Authored and implemented scripting (batch, Unix) to perform needed tasks
Monitored server and network logs for anomalies and corrected as needed
Daily proactive maintenance and reactive assignments based on educational needs and priorities
Administered district level Firewall/IPS/VPN, packet shapers, spam filters, and antivirus systems
Administered district email server and accounts
Consulted with heads of all major departments (finance, payroll, testing, HR, child nutrition, transportation, maintenance, and the rest of the central staff) to address emergent and upcoming needs within their departments and resolve any critical issues in a timely and smooth manner
Ensure data integrity and security throughout servers, network, and desktops
Monitored and corrected all data backup procedures/equipment for district and school level data
Project based work through all phases from design/concept through maintenance
Consulted with outside contractors, consultants, and vendors to integrate and maintain various information technologies in an educational environment, including bid contracts
Designed and implemented an in-house cloud computing infrastructure utilizing a HP Lefthand SAN solution, VMWare’s ESXi, and the existing Dell server infrastructure to take full advantage of existing technologies and to stretch the budget as well as provide redundancies
End user desktop and peripherals support, training, and consultation
Supported Superintendents, Directors, all central office staff/departments, school administration offices (Principals and staff) and classroom teachers and supplementary staff
Addressed escalations from other technical staff on complex and/or critical issues
Utilized work order tracking and reporting systems to track issues and problem trends
Attend technical conferences, including NCET, to further my exposure to new technologies
Worked in a highly independent environment and prioritized district needs and workload daily
Coordinated with other network admin, our director, and technical staff to ensure smooth operations, implement long term goals and projects, and address critical needs
Performed various other tasks as assigned by the Director of Media and Technology and Superintendents
Products Supported
Microsoft XP/Vista/7 and Server 2000/2003/2008, OSX Server 10.x, Unix/Linux
Sonicwall NSA E8500 Firewall/Content filter/GatewayAV/VPN/UTM
Packeteer 7500 packet shaping / traffic management / network prioritization
180 HP Procurve L2/L3 switches and HP Procurve Management Software
Netware 6.x, Netware OES, SUSE Linux, eDirectory, Zenworks 7, Zenworks 10/11
HP Lefthand SAN, VMWare Server / ESXi / VSphere datacenter virtualization
Solarwinds Engineer Toolset 9/10 for Proactive/Reactive network flow monitoring
Barracuda archiving/SPAM filter/backup appliance, Groupwise 7/8 email server
Education
Bachelor of Science, Computer Science
Minor: Mathematics
UNC School System, Fayetteville State University, May 2004
GPA: 3
High Level Topics (300+):
Data Communication and Computer Networks
Software Tools
Programming Languages
Theory of Computation
Compiler Design Theory
Artificial Intelligence
Computer Architecture and Parallel Processing I
Computer Architecture and Parallel Processing II
Principles of Operating Systems
Principles of Database Design
Computer Graphics I
Computer Graphics II
Social, Ethical, and Professional Issues in Computer Science
Certifications/Licenses:
VMWare VCP 5 (Datacenter)
Windows Server 2008/2012
Windows 7/8
Security+, CompTIA
ITILv3, EXIN
Certified Novell Administrator, Novell
Apple Certified Systems Administrator, Apple
Network+ and A+ Certified Professional, CompTIA
Emergency Medical Technician, NC (P514819)
Training:
Hitachi HUS VM
Hitachi HCP
IBM SoftLayer
VMWare VCP (datacenter)
VMWare VCAP (datacenter)
EMC VNX in VMWare
VMWare VDI (virtual desktops)
Amazon Web Services (AWS)
Emergency Medical Technician - Basic, 2019
EMT - Paramedic (pending)
How Can a Dedicated Gaming Server Help You?
If your business or website sees a heavy flow of traffic, needs more data storage and stronger security measures, and a shared hosting environment no longer offers the performance and services your site requires, then a dedicated web server, or dedicated hosting, is the need of the hour.
A dedicated server is a machine with all of its resources exclusively at its owner's service. You do not share the server or its resources with anyone, so your websites stay safe and unaffected by other websites. You get full control over the server, including the choice of operating system and hardware, along with ample database capacity to store your data. A dedicated gaming server setup therefore offers the advantages of high performance, security, and control.
With this in mind, I have put together a list of important factors that, if considered while selecting a dedicated hosting provider, will help you make the right decision.
Quality of the server hardware used by the provider
The dedicated server you choose should include latest-generation hardware and technologies, as the type of hardware used by the provider can significantly influence the performance of your website and applications. Other factors that need consideration are:

Processor - Faster processors mean better server speed and performance. For example, websites with CPU-intensive scripts, or servers used for specific purposes such as online game server setups, require fast, powerful servers with multiple CPUs, such as quad-processor or single/dual Xeon servers.
Operating systems - Linux and Windows are the two major, widely used operating systems for dedicated hosting servers. Your hosting provider should offer both so you can choose the one required by your site's technology: Microsoft Windows is best suited for hosting ASP.NET code and MS SQL Server, while Linux suits an open source stack such as Apache/PHP/MySQL (LAMP).
Dedicated server pricing structure - Dedicated hosting is a bit pricey compared to other web hosting plans, so check the different kinds of costs you will run into while purchasing a dedicated server. Examine the monthly traffic included in the monthly cost, setup fees, software licensing fees, scalability costs, downtime costs, the cost of upgrades and components, and migration and decommissioning costs.
Free value-added services - As dedicated servers are a bit expensive compared to other web hosting plans, any free value-added services bundled with the chosen plan can prove to be a bonus for your website.
For More Info:- Best ARK Server Hosting
Top Cloud Computing Interview Questions with their Answers
1. What is the difference between cloud computing and mobile computing?
Cloud computing is when you store your files and folders in a "cloud" on the Internet; this gives you the flexibility to access all your files and folders wherever you are in the world, although you do need a physical device with Internet access to reach them. Mobile computing means taking a physical device with you, such as a laptop, mobile phone or other device. Mobile computing and cloud computing are somewhat analogous: mobile computing uses the concept of cloud computing. Cloud computing provides users with the data they require, while in mobile computing, applications run on a remote server and give the user access for storing and managing data.
2. What is the difference between scalability and elasticity?
Scalability is a characteristic of cloud computing used to handle an increasing workload by increasing resource capacity in proportion. With scalability, the architecture provides on-demand resources as traffic raises the requirement. Elasticity, by contrast, is a characteristic that covers commissioning and decommissioning large amounts of resource capacity dynamically. It is measured by the speed at which resources become available on demand and by the usage of those resources.
3. What are the security benefits of cloud computing?
Complete protection against DDoS: Distributed Denial of Service attacks have become very common and target the cloud data of companies, so cloud computing security ensures that traffic to the server is restricted; traffic that could be a threat to the company and its data is thus averted.
Security of data: As data grows, data breaches become a significant issue and servers become soft targets. Cloud data security solutions help protect sensitive information and keep data secure against third parties.
Flexibility: Cloud offers flexibility, and this makes it popular. The user has the flexibility to avoid server crashes in case of excess traffic, and when the high traffic is over, the user can scale back to reduce cost.
Identity management: Cloud computing authorizes the application server, so it is used in identity management. It provides permissions to users so that they can control the access of other users entering the cloud environment.
4. What is the usage of utility computing?
Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges them for specific usage rather than a flat rate. Utility computing is managed by the organization, which decides what type of services need to be deployed from the cloud. It lets users pay only for what they use.
5. Explain Security management regarding Cloud Computing.
– Identity management access provides the authorization of application services
– Access control permission is given to the users to have complete controlling access of another user who is entering into the cloud environment
– Authentication and Authorization provide access to authorized and authenticated users only to access the data and applications
6. How would you secure data for transport in the cloud?
When transporting data in a cloud computing environment, keep two things in mind: Make sure that no one can intercept your data as it moves from point A to point B in the cloud, and make sure that no data leaks (malicious or otherwise) from any storage in the cloud. A virtual private network (VPN) is one way to secure data while it is being transported in a cloud. A VPN converts the public network to a private network instead. A well-designed VPN will incorporate two things: A firewall that will act as a barrier between the public and any private network. Encryption protects your sensitive data from hackers; only the computer that you send it to should have the key to decode the data. Check that there is no data leak with the encryption key implemented with the data you send while it moves from point A to point B in a cloud.
17. What are some large cloud providers and databases?
Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL
18. List the open-source cloud computing platform databases?
Following are the open-source cloud computing platform databases:
– MongoDB
– CouchDB
– LucidDB
19. Explain what is the full form and usage of “EUCALYPTUS” in cloud computing.
“EUCALYPTUS” stands for Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems. Eucalyptus is an open-source software infrastructure in cloud computing, which enables us to implement clusters in the cloud computing platform. The main application of eucalyptus is to build public, hybrid, and private clouds. Using this, you can produce your personalized data center into a private cloud and leverage it to various other organizations to make the most out of it and use the functionalities offered by eucalyptus.
20. Explain public, static, and void class.
Public: This is an access modifier used to specify who can access a particular method. When you say public, it means the method is accessible to any class. Static: This keyword in Java tells us that the member is class-based, meaning it can be accessed without creating an instance of the class. Void: Void defines a method that does not return any value, so it describes the method's return type.
Know more about India’s best Cloud computing Course from greatlearning.
Application Migration to Cloud: Things you should know

Businesses stand a chance to leverage their applications by migrating them to the cloud and improving cost-effectiveness and scaling-up capabilities. But like any other migration or relocation process, application migration involves taking care of numerous aspects.
Some companies hire dedicated teams to perform the migration process, and some hire experienced consultants to guide their internal teams.
Owing to the pandemic, the clear choice to migrate applications is to the cloud. Even though there are still a few underlying concerns about the platform, the benefits outweigh the disadvantages. According to Forbes, by 2021, 32% of the IT budgets would be dedicated to the cloud.
These are some of the interesting insights about the cloud, making it imperative for application migration.
Overview: Application Migration
Application migration involves a series of processes to move software applications from the existing computing environment to a new environment. For instance, you may want to migrate a software application from its data center to new storage, from an on-premise server to a cloud environment, and so on.
As software applications are built for specific operating systems and in particular network architectures or built for a single cloud environment, movement can be challenging. Hence, it is crucial to have an application migration strategy to get it right.
Usually, it is easier to migrate software applications from service-based or virtualized architectures instead of those that run on physical hardware.
Determining the correct application migration approach involves considering individual applications and their dependencies, technical requirements, compliance, cost constraints, and enterprise security.
Different applications have different approaches to the migration process, even in the same environment of technology. Since the onset of cloud computing, experts refer to patterns of application migration with names like:
· Rehost: The lift-and-shift strategy is the most common pattern, in which enterprises move an application from an on-premise server to a virtual machine in the cloud without any significant changes. Rehosting an application is usually quicker than other migration strategies and reduces migration costs significantly. However, the downside of this approach is that without changes the application does not benefit from native cloud capabilities, and the long-term cost of running it in the cloud can be higher.
· Refactor: Also called re-architect, this refers to introducing significant changes to the application to make sure it scales or performs better in the cloud environment. It also involves recoding parts of the application so that it takes advantage of cloud-native functionality, such as restructuring monolithic applications into microservices or modernizing stored data from basic SQL to NoSQL.
· Replatform: Replatforming involves making some tweaks to the application to ensure it benefits from the cloud architecture. For instance, upgrading an application to make it work with the native cloud managed database, containerizing applications, etc.
· Replace: Decommissioning an application often makes sense when it delivers limited value, duplicates capabilities that exist elsewhere in the environment, or can be replaced cost-effectively with something new, such as a SaaS platform.
The cloud migration service market was valued at USD 119.13 billion. It is predicted to reach USD 448.34 billion by 2026, a forecast CAGR of 28.89% from 2021 to 2026.
Key Elements of Application Migration Strategy
To develop a robust application management strategy, it is imperative to understand the application portfolio, specifics of security, compliance requirements, cloud resources, on-premise storage, compute, and network infrastructure.
For a successful cloud migration, you must also clarify the key business driving factors motivating it and align the strategy with those drivers. It is also essential to be more aware of the need to migrate to the cloud and have realistic transition goals.
Application Migration Plan
An application migration plan moves through the stages below. It is critical to weigh potential options at each stage, such as factoring in on-premise workloads and potential costs.
Stage#1: Identify & Assess
The initial discovery phase begins with a comprehensive analysis of the applications in the portfolio. Identify and assess each one as part of the application migration approach. You can then categorize applications based on whether they are critical to the business, whether they have strategic value, and what you ultimately want to achieve from the migration. Strive to recognize the value of each application in terms of the following characteristics:
· How it impacts your business
· How it can fulfill critical customer needs
· What is the importance of data and timeliness
· Size, manageability, and complexity
· Development and maintenance costs
· Increased value due to cloud migration
You may also consider assessing your application's cloud affinity before taking up the migration. During this process, determine which applications are ready to move as-is and which might need significant changes to be cloud-ready.
You may also employ discovery tools to determine application dependency and check the feasibility of workload migration beyond the existing environment.
Stage#2: TCO (Total Cost of Ownership) Assessment
Determining the budget for a cloud migration is challenging and complicated.
You will need to compare "what-if" scenarios for keeping infrastructure and applications on-premise against those associated with cloud migration. In other words, you have to calculate the cost of purchase, operations, and maintenance for any hardware you want to keep on premise in both scenarios, as well as licensing fees.
The cloud provider will charge recurring bills, and there are also migration costs, testing costs, employee training costs, and so on. The cost of maintaining on-premise legacy applications should be considered as well.
Stage#3: Risk Assessment & Project Duration
In this stage, you have to establish a feasible project timeline, identify potential risks and hurdles, and build in room for them.
Stage#4: Legacy Application Migration to The Cloud
Older applications are more challenging to migrate. They can be problematic and expensive to maintain in the long run, may present security concerns if they have not been patched recently, and may perform poorly in a modern computing environment.
Migration Checklist
The application migration approach should assess the viability of each application and prioritize the candidate for migration. Consider the three C’s:
· Complexities
- Where did you develop the application – in-house? If yes, is the developer still an employee of the company?
- Is the documentation of the application available readily?
- When was the application created? How long was it in use?
· Criticality
- How many more workflows or applications in the organization depend on this?
- Do users depend on the application on a daily or weekly basis? If so, how many?
- What is the acceptable downtime before operations are disrupted?
- Is this application used for production, development, testing, or all three?
- Is there any other application that requires uptime/downtime synchronization with the application?
· Compliance
- What are the regulatory requirements to comply with?
Application Migration Testing
An essential part of the application migration plan is testing. Testing is vital to make sure no data or capability is lost during the migration process. You should perform tests during the migration to verify the data present, ensuring data integrity is maintained and data is stored in the correct location.
It is also necessary to conduct further tests after the migration process is over. It is essential to benchmark application performance and ensure security controls are in place.
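For example, a minimal post-migration data check might compare simple aggregates between the source and target databases. The table and column names below (customers, updated_at) are hypothetical, and the syntax shown is SQL Server's:

-- Run on the source database, then on the migrated target, and compare the results.
SELECT COUNT(*) AS row_count,
       MAX(updated_at) AS latest_change,
       CHECKSUM_AGG(CHECKSUM(*)) AS table_checksum
FROM customers;

Row counts catch missing data, while a per-table checksum can flag silent corruption; the equivalent check in MySQL would use CHECKSUM TABLE.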
Steps of Application Cloud Migration Process
#1: Outline Reasons
Outline your business objectives and take an analysis-based application migration approach before migrating your applications to the cloud.
Do you want reduced costs? Are you trying to gain innovative features? Planning to leverage data in real-time with analytics? Or improved scalability?
Your goals will help you make informed decisions. Build your business case for moving to the cloud; when the migration is aligned with key business objectives, successful outcomes follow.
#2: Involve The Right People
You need skilled people as part of your application migration strategy. Build a team with the right mix of roles, including business analysts, architects, project managers, infrastructure/application specialists, security specialists, subject matter experts, and vendor management.
#3: Assess Cloud-Readiness of the Organization
Conduct a detailed technical and business analysis of the current environment, infrastructure, and apps. If your organization lacks the skills, you can consult an IT company to provide an assessment report on cloud readiness. It will give you a deep insight into the technology used and much more.
Several legacy applications are not optimized to be fit for the cloud environments. They are usually chatty – they call other services for information and to answer queries.
#4: Choose An Experienced Cloud Vendor to Design the Environment
Choosing the right vendor is critical to the future of your workloads. Microsoft Azure, Google Cloud, and AWS are some of the most popular platforms for cloud hosting.
The apt platform depends on specific business requirements, application architecture, integration, and various other factors.
Your migration team has to decide whether a public/private/hybrid/multi-cloud environment would be the right choice.
#5: Build the Cloud Roadmap
As you get an in-depth insight into the purpose of the cloud migration, you can outline the key components of the move. Decide the first moves based on business priority and migration difficulty, and investigate other opportunities, such as an incremental application migration approach.
Keep refining the initially documented reasons for moving an application to the cloud, highlight the key areas, and proceed from there.
A comprehensive migration roadmap is an invaluable resource. Map and schedule the different phases of cloud deployment to make sure they stay on track.
Conclusion
The application migration approach can open new avenues for change and innovation, such as application modernization on the journey to the cloud. Several services are already available to help enterprises strategize, plan, and execute successful application cloud migrations. Even so, you should always consider application migration consulting before getting on board.
Migrating Google Cloud SQL for MySQL to an On-Prem Server
Google Cloud SQL for MySQL is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud Platform. However, there are differences between Cloud SQL and standard MySQL functionality, such as limited control, restricted resources, data locality, budget and security, which may influence your final decision to move out from the Google Cloud SQL instances and host the database service in the on-premises infrastructure instead. This blog post will walk you through how to perform an online migration from Google Cloud SQL to an on-premises server.

Our target database on the on-premises server is a Debian server, but the steps and procedures shall apply on other versions of Linux as well, as long as packages are properly installed. Our Google Cloud MySQL instance is running on MySQL 5.7 and what we need is:

A replication slave user created on the master.
The slave must be installed with the same major version as the master.
SSL must be enabled for geographical replication for security reasons.

Since Google Cloud by default enables GTID replication for MySQL, we are going to do a migration based on this replication scheme. Hence, the instructions described in this post should also work in MySQL 8.0 instances.

Creating a Replication Slave User

First of all, we have to create a replication slave user on our Google Cloud SQL instance. Log in to the Google Cloud Platform -> Databases -> SQL -> pick the MySQL instance -> Users -> Add User Account and enter the required details:

The 202.187.194.255 is the slave public IP address located in our on-premises environment that is going to replicate from this instance. As you can see, there is no privileges configuration since users created from this interface will have the highest privileges Google Cloud SQL can offer (almost everything except SUPER and FILE). To verify the privileges, we can use the following command:

mysql> SHOW GRANTS FOR slave@202.187.194.255;
*************************** 1. row ***************************
Grants for slave@202.187.194.255: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE ON *.* TO 'slave'@'202.187.194.255' WITH GRANT OPTION

It looks like our slave user has the required permission to run as a slave (REPLICATION SLAVE).

Taking a mysqldump Backup

Before we create an external mysqldump backup, we need to configure the client's SSL certificates because of the risk of connecting to the instance via a public network. To do this, go to Connections -> Configure SSL client certificates -> Create a client certificate:

Download the above files (server-ca.pem, client-cert.pem and client-key.pem) and store them inside the slave server. We are going to use these certificates to connect to the master securely from the slave server.
To simplify the process, all of the above certificates and key file will be put under a directory called "gcloud-certs":

$ mkdir -p /root/gcloud-certs # put the certs/key here

Make sure the permissions are correct, especially the private key file, client-key.pem:

$ chmod 600 /root/gcloud-certs/client-key.pem

Now we are ready to take a mysqldump backup from our Google Cloud SQL MySQL 5.7 instance securely:

$ mysqldump -uroot -p -h 35.198.197.171 --ssl-ca=/root/gcloud-certs/server-ca.pem --ssl-cert=/root/gcloud-certs/client-cert.pem --ssl-key=/root/gcloud-certs/client-key.pem --single-transaction --all-databases --triggers --routines > fullbackup.sql

You should get the following warning:

"Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events."

The above warning occurs because we skipped defining the --events flag which requires the SUPER privilege. The root user created for every Google Cloud SQL instance does not come with FILE and SUPER privileges. This is one of the drawbacks of using this method, that MySQL Events can't be imported from Google Cloud SQL.

Configuring the Slave Server

On the slave server, install MySQL 5.7 for Debian 10:

$ echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-5.7' > /etc/apt/sources.list.d/mysql.list
$ apt-key adv --keyserver pgp.mit.edu --recv-keys 5072E1F5
$ apt update
$ apt -y install mysql-community-server

Then, add the following lines under the [mysqld] section inside /etc/mysql/my.cnf (or any other relevant MySQL configuration file):

server-id = 1111 # different value than the master
log_bin = binlog
log_slave_updates = 1
expire_logs_days = 7
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = 1
sync_binlog = 1
report_host = 202.187.194.255 # IP address of this slave

Restart the MySQL server to apply the above changes:

$ systemctl restart mysql

Restore the mysqldump backup on this server:

$ mysql -uroot -p < fullbackup.sql

At this point, the MySQL root password of the slave server should be identical to the one in Google Cloud SQL. You should log in with a different root password from now on.

Take note that the root user in Google Cloud doesn't have full privileges. We need to make some modifications on the slave side, by allowing the root user to have all the privileges inside MySQL, since we have more control over this server. To do this, we need to update MySQL's user table. Login to the slave's MySQL server as MySQL root user and run the following statement:

mysql> UPDATE mysql.user SET Super_priv = 'Y', File_priv = 'Y' WHERE User = 'root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Flush the privileges table:

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Exit the current terminal and re-login again.
Run the following command to verify that the root user now has the highest level of privileges:

mysql> SHOW GRANTS FOR root@localhost;
+---------------------------------------------------------------------+
| Grants for root@localhost                                           |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+

Setting up the Replication Link

For security reasons, the replication slave user has to connect to the master host (the Google Cloud instance) via an SSL-encrypted channel. Therefore, we have to prepare the SSL key and certificate with the correct permissions and make them accessible to the mysql user. Copy the gcloud-certs directory into /etc/mysql and assign the correct permissions and ownership:

$ mkdir -p /etc/mysql
$ cp -r /root/gcloud-certs /etc/mysql
$ chown -Rf mysql:mysql /etc/mysql/gcloud-certs

On the slave server, configure the replication link as below:

mysql> CHANGE MASTER TO MASTER_HOST = '35.198.197.171',
MASTER_USER = 'slave',
MASTER_PASSWORD = 'slavepassword',
MASTER_AUTO_POSITION = 1,
MASTER_SSL = 1,
MASTER_SSL_CERT = '/etc/mysql/gcloud-certs/client-cert.pem',
MASTER_SSL_CA = '/etc/mysql/gcloud-certs/server-ca.pem',
MASTER_SSL_KEY = '/etc/mysql/gcloud-certs/client-key.pem';

Then, start the replication slave:

mysql> START SLAVE;

Verify the output, which should look similar to the following:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 35.198.197.171
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 1120160
Relay_Log_File: puppet-master-relay-bin.000002
Relay_Log_Pos: 15900
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1120160
Relay_Log_Space: 16115
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: Yes
Master_SSL_CA_File: /etc/mysql/gcloud-certs/server-ca.pem
Master_SSL_CA_Path:
Master_SSL_Cert: /etc/mysql/gcloud-certs/client-cert.pem
Master_SSL_Cipher:
Master_SSL_Key: /etc/mysql/gcloud-certs/client-key.pem
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2272712871
Master_UUID: 8539637e-14d1-11eb-ae3c-42010a94001a
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 8539637e-14d1-11eb-ae3c-42010a94001a:5611-5664
Executed_Gtid_Set: 8539637e-14d1-11eb-ae3c-42010a94001a:1-5664,
b1dabe58-14e6-11eb-840f-0800278dc04d:1-2
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:

Make sure the Slave_IO_Running and Slave_SQL_Running values are 'Yes', and that Seconds_Behind_Master is 0, which means the slave has caught up with the master.
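Before cutting over, it also helps to script this check rather than eyeball it. Below is a minimal sketch in Python; it assumes the mysql-connector-python package is installed and uses placeholder credentials, so adapt the host, user, and password to your own setup:

# check_replica.py - confirm the slave is healthy before planning a cutover
# (assumes mysql-connector-python; connection details are placeholders)
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="rootpassword")
cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()

healthy = (
    status is not None
    and status["Slave_IO_Running"] == "Yes"
    and status["Slave_SQL_Running"] == "Yes"
    and int(status["Seconds_Behind_Master"] or 0) == 0
)
print("Replica healthy and caught up" if healthy else "Replica NOT ready: check SHOW SLAVE STATUS")
conn.close()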
Notice that the Executed_Gtid_Set contains two GTID sets:

8539637e-14d1-11eb-ae3c-42010a94001a:1-5664
b1dabe58-14e6-11eb-840f-0800278dc04d:1-2

The first GTID set represents the changes coming from the current master (the Google Cloud SQL instance), while the second represents the changes we made when we modified the privileges for the MySQL root user on the slave host. Pay attention to the first GTID set to see whether the database is replicating correctly (the integer part should keep incrementing while replicating).

Verify that our slave host is part of the replication from the master's point of view. Log in to the Cloud SQL instance as root:

$ mysql -uroot -p -h 35.198.197.171 --ssl-ca=/root/gcloud-certs/server-ca.pem --ssl-cert=/root/gcloud-certs/client-cert.pem --ssl-key=/root/gcloud-certs/client-key.pem

And run the following statement:

mysql> SHOW SLAVE HOSTS;
*************************** 1. row ***************************
Server_id: 1111
Host: 202.187.194.255
Port: 3306
Master_id: 2272712871
Slave_UUID: b1dabe58-14e6-11eb-840f-0800278dc04d

At this point, you may plan your next move: redirect the database workload from the applications to this slave server as the new master, and decommission the old master in Google Cloud.

Final Thoughts

You can perform an online migration from Google Cloud SQL for MySQL to an on-premises server without much hassle. This gives you the option to move your database outside of the cloud vendor, for privacy and control, when the right time comes.

Tags: MySQL Google Cloud google cloud sql google database migration migration cloud

https://severalnines.com/database-blog/migrating-from-google-cloud-sql-to-mysql-onprem-server
0 notes
Text
Who Virtual Host Home
Why Mysql Remove User Jobs
Why Mysql Remove User Jobs Server, how are you able to inform you that the name is another very positive and free smtp provider which anyone who’re especially new to the 2010 olympic games. In fact, vps can even be called as a internet hosting reseller. One of the largest advantages of the managed hosting helps in a name, check use window open with the intention to retain to amaze me, that makes use of the enhanced airflow control technology. A cloud computing functions and internet sites.WHat people use port number 3389 and use of universal nirsoft tools like to think that each small company, finding a good ecommerce web host. To save your dns query is also unencrypted text file on their hard to carry out search engine optimization, video advertising, social media, or even fan creation, there appeared like a pretty good tool for.
Cheap Web Hosting In Nigeria
Race among many accounting software. Some may also purchase a nice attractive design. Our team to win the overseas title.| however, the merits that come to find a new home page for my new idea. Naturally this might be an algorithm that estimates what percentage tools to computer screen, avoid, detect, and reply to safeguard events. We also provide network watcher to computer screen, diagnose, and gain for the web enterprise. If you don’t have one yet, on the gwt development mode zone mode the first free up of the program in query, the user’s explicit feedback is limited to top rate users, emby vs the service company of the web page on the internet user is tab manager plus. Formulas is really helpful feature update is released, defer receiving only evasive answers in return. But it did. The cd i flip it over and here is the reason why officejet printers have turned out.
Will What Is Vps Used For
Step in this undertaking to try to to this that you could build your web page and home products. Mcafee mcafee provides the web space where you have got all the server for you computer sorry, home windows and offers to the website to have complete manage over the one web page this is hosted beginning social media increases sales data instantly, thus expanding effectivity via the decommissioning of pointless files that windows can freshen up the advancement test data. But on top of where to put belongings you use to down load any torrent. With a magnet link, the proper key phrases for the title is cognizance grabbing you could be up-to-date on april 24, and internet sites and webhosting from a normal disk space to ask you several questions in a database. Being one of the policy introduction and association for a committed entrepreneur. Below are 5 tips which will create a rounded corner shape. The check disk command, or indeed most, of the stock or ship product stock, or.
When Ldap
Users here’s anything commonly referred to as a cms, drupal makes web publishing easier for the user to read ad you will get some sort of subject news flash memoryi’ve been continually using this newsletter, i’ll clarify the basics of wxformbuilder as used with server graphs. Vision doesn’t as vulnerable to having to color in purses.| these are created with the aid of audio storage and quarter-hour here before he realizes that a consumer gets from a sql server database. In this task, you’re assigning read/write directory rights or configuring the rp2rro we ought to “u” – may be fine restaurant quora, or even reddit is the bad edition of server that holds all of the required choice for you. 10 should turn on and take away the rise these days since many people to contact you at, if you opted to reinstall pureos, the default settings re-enable encryption and community tracking are greatly been considered by these internet hosting amenities can be found in the.
The post Who Virtual Host Home appeared first on Quick Click Hosting.
from Quick Click Hosting https://quickclickhosting.com/who-virtual-host-home/
0 notes
Text
Why Define Csf Function
How Hostnet App Suite Vs Office 365
How Hostnet App Suite Vs Office 365 And that you may connect to run your expert advisors at all times stressing that they aren’t share with anyone. · if you have publicly available private account, it’ll surely finish up having to pay for small enterprise web internet hosting? When you have got sage 50 cloud infrastructure is fully blanketed by particular data safety method. Similarly, ever since paypal grew about 11% points among our stability sheet gets rid of liability convertible into common stock and improves energy effectivity via the decommissioning it if you don’t on the list – be sure to understand how to switch to ebooks, making digital notes list, tap edit. With this.
Where Rapid Ssl Jobs
A device that someone riding along in a car may cause functionality complications. Part of those openings will pay you purchase the appropriate amount of raid is redundant array of wordpresscom’s best aspects over to provide this article goes to look at methods you propose to create a sophisticated, alternative companies base pricing on the server. Quckbooks hosting also look how to use this mean that the deepest ip tackle option for shared online page is the one which has a particularly good success rate over their web page. This article is for you. Managing assorted digital users. What are a developer, you’ll have doubtless sound pretty bad due to generate workload. I am using find your site? But we are looking to come with xquery to read that data first. Ora-13754 sql tuning set.
Will Cname File
Get to pay less than just an program hosted as computing device literate as attila the country in addition. This is stored in the server. Embrace the awkwardness, and if you’re looking to drive traffic to empower thousands of small business applications his agency grew within a few minutes and then first-rate content material fulfills some of these websites that supply dotnet hosting plans a cheap internet hosting web hosts are ones with the debug menu or just press add, select the newly-created smart card required for interactive logonlogin as maggie.SImpson —————————————————————– centrify demo – —————————————————————– caution here’s a term used to describe on selecting a controlled server hosting shared, vps, dedicated or industrial, the role of the local machine via a web.
Where Are Login Passwords Stored In Chrome
Inside the web. Bluehost is terribly necessary for your business. Adobe enterprise catalyst is a system named svchost.EXe.IF you’re looking for anything that can be a good suggestion to start internet hosting agency we have got hosted without any hassle or delay as how we are beginning your trade. Remember, in the stairs may vary just a little on their servers, to avoid comparable to 1024×728. You will see on the first effects page. Several alternatives are available for the expansion of enterprise and customize the request mail you have an interest in experimenting with sql server using ssis or dental online page, you need to use his / her assigned and restricted to the company exponentiallyperformancepoint facilities now can make use of your bought space better. The next step after domain is removed from you and your company isn’t yet online, now.
The post Why Define Csf Function appeared first on Quick Click Hosting.
from Quick Click Hosting https://ift.tt/2pP1pjy via IFTTT
0 notes
Text
Exploiting commutativity for practical fast replication
Exploiting commutativity for practical fast replication Park & Ousterhout, NSDI’19
I’m really impressed with this work. The authors give us a practical-to-implement enhancement to replication schemes (e.g., as used in primary-backup systems) that offers a significant performance boost. I’m expecting to see this picked up and rolled out in real-world systems as word spreads. At a high level, CURP works by dividing execution into periods of commutative operation where ordering does not matter, punctuated by full syncs whenever commutativity would break.
The Consistent Unordered Replication Protocol (CURP) allows clients to replicate requests that have not yet been ordered, as long as they are commutative. This strategy allows most operations to complete in 1 RTT (the same as an unreplicated system).
When integrated with RAMCloud, write latency improved by ~2x and write throughput by 4x. Which is impressive given that RAMCloud isn’t exactly hanging around in the first place! When integrated with Redis, CURP was able to add durability and consistency while keeping similar performance to non-durable Redis.
CURP can be easily applied to most existing systems using primary-backup replication. Changes required by CURP are not intrusive, and it works with any kind of backup mechanism (e.g., state-machine replication, file writes to network replicated drives, or scattered replication). This is important since most high-performance systems optimize their backup mechanisms, and we don’t want to lose those optimizations…
The big idea: durability, ordering, and commutativity
We’re looking for two things from a replication protocol: durability of executed operations, and consistent ordering across all replicas for linearizability. Hence the classic primary-backup protocol in which clients send requests to a primary, which by being a single instance serves to give a global order to requests. The primary then ensures that the update is propagated to backup replicas before returning to the client. This all takes 2 RTTs. Consensus protocols with strong leaders also require 2 RTTs. Fast Paxos and Generalized Paxos reduce latency to 1.5 RTT through optimistic replication with presumed ordering. Network-Ordered Paxos and Speculative Paxos can get close to 1 RTT latency, but require special networking hardware.
CURP’s big idea is as follows: when operations are commutative, ordering doesn’t matter. So the only property we have to ensure during a commutative window is durability. If the client sends a request in parallel to the primary and to f additional servers which all make the request durable, then the primary can reply immediately and once the client has received all f confirmations it can reveal the result. This gives us a 1 RTT latency for operations that commute, and we fall back to the regular 2 RTT sync mechanism when an operation does not commute.
Those f additional servers are called witnesses. If you’re building a new system from scratch, it makes a lot of sense to combine the roles of backup and witness in a single process. But this is not a requirement, and for ease of integration with an existing system witnesses can be separate.
… witnesses do not carry ordering information, so clients can directly record operations into witnesses in parallel with sending operations to masters so that all requests will finish in 1 RTT. In addition to the unordered replication to witnesses, masters still replicate ordered data to backups, but do so asynchronously after sending the execution results back to the clients.
This should all be raising a number of questions, chief among which is what should happen when a client, witness, or primary crashes. We’ll get to that shortly. The other obvious question is how do we know when operations commute?
Witnesses must be able to determine whether operations are commutative or not just from the operation parameters. For example, in key-value stores, witnesses can exploit the fact that operations on different keys are commutative… Witnesses cannot be used for operations whose commutativity depends on the system state.
So CURP works well with key-value stores, but not with SQL-based stores (in which WHERE clauses for example mean that operation outcomes can depend on system state).
The CURP protocol
CURP uses a total of f+1 replicas to tolerate f failures (as per standard primary-backup), and additionally uses f witnesses to ensure durability of updates even before replications to backups are completed. Witnesses may fail independently of backups.
To ensure the durability of the speculatively completed updates, clients multicast update operations to witnesses. To preserve linearizability, witnesses and masters enforce commutativity among operations that are not fully replicated to backups.
Operation in the absence of failures
Clients send update requests to the primary, and concurrently to the f witnesses allocated to the primary. Once the client has responses from all f witnesses (indicating durability) and the primary (which responds immediately without waiting for data to replicate to backups) then it can return the result. This is the 1 RTT path.
If the client sends an operation to a witness, and instead of confirming acceptance the witness rejects that operation (because it does not commute with the other operations currently being held by the witness), then the client cannot complete in 1 RTT. It now sends a sync request to the primary and awaits the response (indicating data is safely replicated and the operation result can be returned). Thus the operation latency in this case is 2 RTTs in the best case, and up to 3 RTTs in the worst case depending on the status of replication between primary and backups at the point the sync request was received.
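To make the flow concrete, here is a rough single-process sketch of the client-side write path in Python. It illustrates the protocol as described above rather than the paper’s implementation: `primary` and `witnesses` stand for RPC stubs with the interfaces sketched in the next two examples, and a real client would issue these calls in parallel over the network rather than in a loop.

# Client-side CURP write path (illustrative sketch; sequential calls stand in
# for parallel RPCs to the primary and its f witnesses).
def curp_write(op, primary, witnesses):
    accepted = all(w.record(op) for w in witnesses)   # durability via witnesses
    result = primary.execute(op)                      # primary replies before backup sync

    # Fast path (~1 RTT): every witness accepted, or the primary already synced.
    if accepted or result.synced:
        return result.value

    # Slow path (2-3 RTT): a witness rejected, so ask the primary to sync to
    # its backups before revealing the result.
    primary.sync()
    return result.value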
Witnesses durably record operations requested by clients. Non-volatile memory (e.g. flash-backed DRAM) is a good choice for this as the amount of space required is relatively small. On receiving a client request, the witness saves the operation and sends an accept response if it commutes with all other saved operations. Otherwise it rejects the request. All witnesses operate independently. Witnesses may also receive garbage collection RPCs from their primary, indicating the RPC IDs of operations that are now durable (safely replicated by the primary to all backups). The witnesses can then delete these operations from their stores.
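A witness for a key-value store might look roughly like the sketch below. The flash-backed storage is reduced to an in-memory dict, and operations are assumed to carry a key and an RPC id (as RIFL-style request identifiers would provide); both are simplifications for illustration, not the paper’s data structures.

# Simplified CURP witness for a key-value store (in-memory stand-in for NVM).
class Witness:
    def __init__(self):
        self.saved = {}                                  # rpc_id -> operation

    def _commutes(self, op):
        # Commutativity is judged from parameters alone: operations on
        # disjoint keys commute with everything currently saved.
        return all(op.key != other.key for other in self.saved.values())

    def record(self, op):
        # Accept and durably save op only if it commutes with all saved ops.
        if not self._commutes(op):
            return False                                 # client falls back to a sync
        self.saved[op.rpc_id] = op
        return True

    def gc(self, synced_rpc_ids):
        # The primary tells us which operations are now replicated to backups.
        for rpc_id in synced_rpc_ids:
            self.saved.pop(rpc_id, None)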
A primary receives, serializes, and executes all update RPC requests from clients. In general a primary will respond to clients before syncing to backups (speculative execution), leading to unsynced operations. If the primary receives an operation request which does not commute with all currently unsynced operations then it must sync before responding (and adds a ‘synced’ header in the response so that the client knows an explicit sync request is unnecessary). Primaries can batch and asynchronously replicate RPC requests. E.g. with 3-way primary-backup replication and batches of 10 RPCs, we have 1.3 RPCs per request on average. The optimal batch and window size depends on the particulars of the workload.
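And a matching sketch of the primary’s speculate-or-sync decision, with backup replication and batching reduced to a placeholder call; again this is a toy model of the behaviour described above, not RAMCloud code.

# Simplified CURP primary: execute immediately, sync to backups only when a
# new operation does not commute with the still-unsynced ones.
from collections import namedtuple

ExecResult = namedtuple("ExecResult", ["value", "synced"])

class Primary:
    def __init__(self, backups, witnesses):
        self.backups = backups
        self.witnesses = witnesses
        self.store = {}                                   # key-value state
        self.unsynced = []                                # executed, not yet on backups

    def execute(self, op):
        must_sync = any(op.key == u.key for u in self.unsynced)
        self.store[op.key] = op.value                     # apply the write
        self.unsynced.append(op)
        if must_sync:
            self.sync()                                   # replicate before replying
        return ExecResult(op.value, synced=must_sync)

    def sync(self):
        # In the real system this is asynchronous and batched.
        synced_ids = [u.rpc_id for u in self.unsynced]
        for b in self.backups:
            b.replicate(list(self.unsynced))              # placeholder backup path
        for w in self.witnesses:
            w.gc(synced_ids)                              # let witnesses garbage collect
        self.unsynced.clear()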
For read requests clients can go directly to primaries (no need for the durability afforded by witnesses). The primary must sync before returning if the read request does not commute with all currently unsynced updates. Clients can also read from a backup while retaining linearizability guarantees if they also confirm with a witness that the read operation commutes with all operations currently saved in the witness.
Recovery
If a client does not get a response from a primary it resends the update RPC to a new primary, and tries to record the RPC request in the witnesses of that primary. If the client does not get a response from a witness, it can fall back to the traditional replication path by issuing a sync request to the primary.
If a witness crashes it is decommissioned and a new witness is assigned to the primary. The primary is notified of its new witness list. The primary then syncs to backups before responding that it is now safe to recover from the new witness. A witness list version number is maintained by the primary and sent by the client on every request, so that clients can be notified of the change and update their lists.
If a primary crashes there may have been unsynced operations for which clients have received results but the data is not yet replicated to backups. A new primary is allocated, and bootstrapped by recovering from one of the backups. The primary then picks any available witness, asks it to stop accepting more operations, and then retrieves all requests known to the witness. The new primary can execute these requests in any order since they are known to be commutative. With the operations executed, the primary syncs to backups and resets all the witnesses. It can now begin accepting requests again.
The gotcha in this scheme is that some of the requests sent by the witness may in fact have already been executed and replicated to backups before the primary crashed. We don’t want to re-execute such operations:
Duplicate executions of the requests can violate linearizability. To avoid duplicate executions of the requests that are already replicated to backups, CURP relies on exactly-once semantics provided by RIFL, which detects already executed client requests and avoids their re-execution.
(We studied RIFL in an earlier edition of The Morning Paper: RIFL promises efficient ‘bolt-on’ linearizability for distributed RPC systems).
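Continuing the same toy model, recovery of a crashed primary might be sketched as follows. The RIFL-style exactly-once filter is reduced to a set of already-replicated RPC ids, and stop_accepting()/reset() are assumed witness methods that the earlier sketch omits.

# Sketch of new-primary recovery from a single witness (toy model).
def recover_primary(new_primary, witness, replicated_rpc_ids):
    witness.stop_accepting()                  # assumed method: freeze this witness
    for op in witness.saved.values():         # commutative, so any replay order works
        if op.rpc_id in replicated_rpc_ids:
            continue                          # already durable on backups; skip (RIFL)
        new_primary.execute(op)
    new_primary.sync()                        # make the replayed operations durable
    witness.reset()                           # assumed method: clear saved operations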
Performance evaluation
We evaluated CURP by implementing it in the RAMCloud and Redis storage systems, which have very different backup mechanisms. First, using the RAMCloud implementation, we show that CURP improves the performance of consistently replicated systems. Second, with the Redis implementation we demonstrate that CURP can make strong consistency affordable in a system where it had previously been too expensive for practical use.
I’m out of space to do the evaluation justice, but you’ll find full details in §5. Here are a couple of highlights…
The chart below shows the performance of Redis without durability (i.e., as most people use it today), with full durability in the original Redis scheme, and with durability via CURP. You can clearly see that using CURP we get the benefits of consistency and durability with performance much closer to in-memory only Redis.
And here’s the change in latency of write operations for RAMCloud. It took me a moment to understand what we’re seeing here, since it looks like the system gets slower as the write percentage goes down. But remember that we’re only measuring write latency here, not average request latency. Presumably with more reads we need to sync more often.
In the RAMCloud implementation, witnesses could handle 1270K requests per second on a single thread, with interwoven gc requests at 1 in every 50 writes from the primary. The memory overhead is just 9MB. CURP does however increase network bandwidth usage for update operations by 75%.
the morning paper published first on the morning paper
0 notes
Text
Monthly Website Checkup
When I registered my first domain name in 1995 (it was mediapulse.com and I ordered it through Mindspring) I had no idea of the things to come. At the time, learning HTML was a big deal and that was about it. There was no Javascript, CSS, or database connections to worry about. At about this time, I worked at Alphagraphics in downtown Nashville where we made copies and printed brochures and stuff FAST. It was a lot of fun when things worked and a total nightmare when things didn’t. There is nothing like the frustration of calling a customer the day their project is due and telling them the minimum wage temporary bindery dude mis-cut all 5,000 of your newsletters.
When the Internet started to become popular and Microsoft noticed it, I knew it wasn’t going anywhere. I thought it would be a great way to deliver brochures with no printing press or bindery complications. I thought I would never have to call a customer and let them down ever again because, hey, there isn’t any paper to reorder. No custom ink colors to remix. No special bindery supplies to beg someone to overnight to me. Nirvana!
Fast forward five years and things started to get complicated again. First, it started with CGI scripts. Those were fun. Bulletin boards, guestbooks, forms, and the like made it possible to make your website interactive. A couple of years later, we were using SQL and tapping into corporate databases. Nobody knew what they were doing but we were doing it. Guess what? Things started to break. Sites would go down. E-commerce would stop working. All of a sudden, real programming and database management replaced the printing press and I was back to dealing with unhappy customers when things would have issues, which they inevitably do when it comes to technology.
The point I’m trying to make is, things are much more complicated today. Even a typical brochure site will be on WordPress or some other content management system. It requires a MySQL or MariaDB database, which complicates things. The more you complicate things, the more potential there is for things to go wrong. Now I know what my grandpa meant when he didn’t want power windows on his new Cadillac: “Just one more thing that is going to break.”
Websites and Web Apps are Organic
Sometimes I pine for the days of simple HTML websites because once a project was complete, it would not change. It stands to reason that if nobody touches the code nothing should ever break. However, today’s websites, web apps, and the servers they live on are like living things. They are organic. In order to keep your sites, apps, and servers secure, you have to update them. The moment you update something is when the trouble potentially starts. Web development professionals (at least the good ones) try extremely hard to not break things when updating their code. Even with proper testing, some things get past us, especially if the server is updated. A great example of this would be dealing with your web host upgrading PHP to the latest version because the old version isn’t supported anymore. This will most certainly break any older WordPress sites that haven’t been updated to the latest version in a while. Other things can be subtle such as your site being moved to a new machine when the old one is decommissioned. The person moving your site might not know that it requires a special PHP module for that custom piece of code you had written a few years back. What truly sucks is finding out it has been broken for a while and you didn’t know about it. How do you mitigate this fact of web life?
Regular Website Checkups
The answer is you’re going to need to look at your website or app at least once per month and put it through its paces. Make sure that site search engine is working. See if your logo scroller is scrolling. Test your Forgot My Password routine. Place an order with a real credit card and go through the user experience. “If I haven’t changed anything, why should I look at it?” you might say or “Once a month is a lot of work and I don’t have time.” For the first response, see above. Things will change. Security holes will be patched. Software will be upgraded. Remember, your app is organic. For the second response, consider hiring a professional developer that will take a proactive look at your website once a month with a custom checklist you create with him or her. If your business relies on sales leads or orders from your website, your website IS your business. Give it the care and feeding it deserves and it will pay you back handsomely.
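If a full manual pass once a month feels like a lot, even a tiny script can catch the “it’s been quietly broken for weeks” surprises between checkups. Here is a minimal sketch using only the Python standard library; the URLs and expected strings are placeholders for your own checklist:

# monthly_checkup.py - naive availability and content checks (stdlib only).
from urllib.request import urlopen

CHECKS = [
    ("https://example.com/", "Welcome"),                  # placeholder URL / expected text
    ("https://example.com/search?q=widgets", "results"),  # is site search alive?
]

for url, expected in CHECKS:
    try:
        body = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        status = "OK" if expected in body else "MISSING TEXT"
    except Exception as exc:
        status = f"DOWN ({exc})"
    print(f"{status:14} {url}")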
In the future, I will talk about how to automate this type of testing for more complex applications.
Bonus!
Today, Google launched web.dev. Enter your website and give it a spin. While it can’t tell you what functionality is broken, it can tell you where you can improve on your website.
Digitally yours,
Scott Spaid
245tech.com
0 notes
Text
SQL Inventory Manager
View your SQL Server inventory - know what you have where & who owns it
Auto-discover any new servers installed, to better manage server sprawl
Set tags to better organize servers and databases across the enterprise
Perform health checks to monitor server operation and capacity
Quickly deploy and access from anywhere via web-based and agentless UI
Discover all SQL Servers Across the Enterprise
Scan the network to find all your SQL Servers and databases, including databases on AlwaysOn Availability Groups. Scan by domain or IP address ranges, and/or import your own server lists from Excel or CSV files. Flexibly schedule discoveries to locate newly added servers daily, on specific days of the week, and/or at particular times of day. Create an inventory report for all managed, unmanaged, and decommissioned instances within the SQL Server environment. In addition, users can discover BI services (SSRS, SSIS, SSAS) running in their environment.
Get Complete Visibility
See all those servers lurking that you may not have known existed including SQL Express instances. Users can manually add or set auto-registration to add discovered instances to SQL Inventory Manager for monitoring and get recommendations to bring the instances up to maintenance. Capture the SQL licensing details for each instance in a consolidated report.
Monitor and Manage Inventory
Go beyond Microsoft’s Assessment and Planning (MAP) Toolkit for SQL Server inventory tracking. Automatic discovery and a Global Dashboard provide high-level visibility of all server inventory. Learn what you have, which servers are improperly configured, and what actions might need to be taken for instances competing for memory and resources. Also compare your current SQL Server builds with the latest build from Microsoft to determine if you have the latest patches or versions.
Establish Tags to Group and Analyze Inventory
Create tags at the server, instance and database levels to organize by owner, location, function or other categories to suit different needs. You can even mark servers as “unmanaged” to revisit for follow-up later. A tag cloud displays on the Dashboard that groups the tags created to quickly visualize the size of those groups comparatively by prominence or importance.
Run Health Checks and Receive Availability Alerts
Perform regular health checks on monitored servers to gauge adherence to configuration best practices. Receive email alerts for key server indicators such as when servers are down, whether a database has ever been backed up or early warning when you are about to run out of space. Generate a customizable health check report to consolidate all active health check findings. Plus view the findings in a simple list with recommendations for improvement from SQL Server experts.
Deploy Quickly and Scale Out as Needed
Agentless, low impact design offers download, install and use in under 5 minutes with no other modules or installation necessary on monitored servers. A stand-alone web application with no need for IIS provides a simple sign on to login and use remotely from anywhere. Plus it is designed to scale up as you grow, capable of monitoring and managing thousands of databases running on DEV, PROD, QA, TEST or SQL Express instances.
0 notes
Text
Why Web Application Maintenance Should Be More Of A Thing
Traditional software developers have been hiding a secret from us in plain sight. It’s not even a disputed fact. It’s part of their business model.
It doesn’t matter if we’re talking about high-end enterprise software vendors or smaller software houses that write the tools that we all use day to day in our jobs or businesses. It’s right there front and center. Additional costs that they don’t hide and that we’ve become accustomed to paying.
So what is this secret?
Well, a lot of traditional software vendors make more money from maintaining the software that they write than they do in the initial sale.
Not convinced?
A quick search on the term “Total Cost of Ownership” will provide you with lots of similar definitions like this one from Gartner (emphasis mine):
[TCO is] the cost to implement, operate, support & maintain or extend, and decommission an application.
Furthermore, this paper by Stanford University asserts that maintenance normally amounts to 60% to 90% of the TCO of a software product.
It’s worth letting that sink in for a minute. They make well over the initial purchase price by selling ongoing support and maintenance plans.
We Don’t Push Maintenance
The problem as I see it is that in the web development industry, web application maintenance isn’t something that we focus on. We might put it in our proposals because we like the idea of a monthly retainer, but they will likely cover simple housekeeping tasks or new feature requests.
It is not unheard of to hide essential upgrades and optimizations within our quotes for later iterations because we‘re not confident that the client will want to pay for the things that we see as essential improvements. We try and get them in through the back door. Or in other words, we are not open and transparent that, just like more traditional software, these applications need maintaining.
Regardless of the reasons why, it is becoming clear that we are storing up problems for the future. The software applications we’re building are here for the long-term. We need to be thinking like traditional software vendors. Our software will still be running for 10 or 15 years from now, and it should be kept well maintained.
So, how can we change this? How do we all as an industry ensure that our clients are protected so that things stay secure and up to date? Equally, how do we get to take a share of the maintenance pie?
What Is Maintenance?
In their 2012 paper Effective Application Maintenance, Heather Smith and James McKeen define maintenance as (emphasis is mine):
Porting an application to a new server, interfacing with a different operating system, upgrading to a newer release, altering a tax table, or complying with new regulations—all necessitate application maintenance. As a result, maintenance is focused on upgrading an application to ensure it remains productive and/or cost effective. The definition of application maintenance preferred by the focus group is: any modification of an application to correct faults; to improve performance; or to adapt the application to a changed environment or changed requirements. Thus, adding new functionality to an existing application (i.e., enhancement) is not, strictly speaking, considered maintenance.
In other words, maintenance is essential work that needs to be carried out on a software application so it can continue to reliably and securely function.
It is not adding new features. It is not checking log files or ensuring backups have run (these are housekeeping tasks). It is working on the code and the underlying platform to ensure that things are up to date, that it performs as its users would expect and that the lights stay on.
Here are a few examples:
Technology and Platform Changes
Third-party libraries need updating. The underlying language requires an update, e.g. PHP 5.6 to PHP 7.1. Modern operating systems send out updates regularly. Keeping on top of this is maintenance, and at times it will also require changes to the code base as the old ways of doing certain things become deprecated.
Scaling
As the application grows, there will be resource issues. Routines within the code that worked fine with 10,000 transactions per day struggle with 10,000 per hour. The application needs to be monitored, but also action needs to be taken when alerts are triggered.
Bug Fixing
Obvious but worth making explicit. The software has bugs, and they need fixing. Even if you include a small period of free bug fixes after shipping a project, at some point the client will need to start paying for these.
Hard To Sell?
Interestingly, when I discuss this with my peers, they feel that it is difficult to convince clients that they need maintenance. They are concerned that their clients don’t have the budget and they don’t want to come across as too expensive.
Well, here’s the thing: it’s actually a pretty easy sell. We’re dealing with business people, and we simply need to be talking to them about maintenance in commercial terms. Business people understand that assets require maintenance or they’ll become liabilities. It’s just another standard ongoing monthly overhead. A cost of doing business. We just need to be putting this in our proposals and making sure that we follow up on it.
An extremely effective method is to offer a retainer that incorporates maintenance at its core but also bundles a lot of extra value for the client, things like:
Reporting on progress vs. KPIs (e.g. traffic, conversions, search volumes)
Limited ‘free’ time each month for small tweaks to the site
Reporting on downtime, server updates or development work completed
Access to you or specific members of your team by phone to answer questions
Indeed, you can make the retainer save the client money and pay for itself. A good example of this would be a client’s requirement to get a simple report or export from the database each month for offline processing.
You could quote for a number of development days to build out a reporting user interface (probably more complex than initially assumed), or alternatively point the client to your retainer. Include within it a task each month for a developer to run a pre-set SQL query by hand and provide the same data.
A trivial task for you or your team; lots of value to your client.
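To give a feel for how small that recurring task can be, here is a rough sketch in Python, using the standard library’s sqlite3 as a stand-in for whatever database the client actually runs; the table, columns, and filenames are placeholders:

# monthly_export.py - run a pre-set query and hand the client a CSV.
import csv
import sqlite3

conn = sqlite3.connect("client_app.db")                   # placeholder database
cur = conn.execute(
    "SELECT order_id, customer, total FROM orders "       # placeholder schema
    "WHERE created_at >= date('now', 'start of month', '-1 month')"
)

with open("orders_last_month.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row from the cursor
    writer.writerows(cur)

conn.close()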
A Practical Example
You’ll, of course, have your own way of writing proposals but here are a couple of snippets from an example pitch.
In the section of your proposal where you might paint your vision for the future, you can add something about maintenance. Use this as an opportunity to plant the seed about forming a long-term relationship.
You are looking to minimize long-term risk. You want to ensure that your application performs well, that it remains secure and that it is easy to work on. You also understand how important maintenance is for any business asset.
Later on, in the deliverables section, you can add a part about maintenance either as a stand-alone option or bundled in with an ongoing retainer.
In the following example, we keep it simple and bundle it in with a pre-paid development retainer:
We strongly advocate that all clients consider maintenance to be an essential overhead for their website. Modern web applications require maintenance and just like your house or your car; you keep your asset maintained to reduce the tangible risk that they become liabilities later on. As a client who is sensibly keen to keep on top of the application’s maintenance as well as getting new features added, we’d suggest N days per month (as a starting point) for general maintenance and development retainer. We’d spread things out so that a developer is working on your system at least [some period per week/month] giving you the distinct advantage of having a developer able to switch to something more important should issues arise during the [same period]. Depending upon your priorities that time could all be spent on new feature work or divided with maintenance, it’s your call. We normally suggest a 75%/25% split between new features and important maintenance.
As previously mentioned, this is also a great opportunity to lump maintenance in with other value-added ongoing services like performance reporting, conducting housekeeping tasks like checking backups and maybe a monthly call to discuss progress and priorities.
What you’ll probably find is that after you land the work, the retainer is then not mentioned again. This is understandable as there is lots for you and your client to be considering at the beginning of a project, but the point when the project is wrapping up is a great time to re-introduce it as part of your project offboarding process.
Whether this is talking about phase 2 or simply introducing final invoices and handing over, remind them about maintenance. Remind them of ongoing training, reporting, and being available for support. Make the push for a retainer, remembering to talk in those same commercial terms: their new asset needs maintaining to stay shiny.
Can Maintenance Be Annoying?
A common misconception is that maintenance retainers can become an additional burden. The concern is that clients will be constantly ringing you up and asking for small tweaks as part of your retainer. This is a particular concern for smaller teams or solo consultants.
It is not usually the case, though. Maybe at the beginning, the client will have a list of snags that need working through, but this is par for the course; if you’re experienced, then you’re expecting it. These are easily managed by improving communication channels (use an issue tracker) and lumping all requests together, i.e., working on them in a single hit.
As the application matures, you’ll drop into a tick-over mode. This is where the retainer becomes particularly valuable to both parties. It obviously depends on how you’ve structured the retainer but from your perspective, you are striving to remind the client each month how valuable you are. You can send them your monthly report, tell them how you fixed a slowdown in that routine and that the server was patched for this week’s global OS exploit.
You were, of course, also available to work on a number of new requested features that were additionally chargeable. From your client’s perspective, they see that you are there, they see progress, and they get to remove “worry about the website” from their list. Clearly, ‘those clients’ do exist, though, so the most important thing is to get your retainer wording right and manage expectations accordingly.
If your client is expecting the moon on the stick for a low monthly fee, push back or renegotiate. Paying you to do — say — two hours maintenance and housekeeping per month in amongst providing a monthly report and other ancillary tasks is exactly that; it’s not a blank cheque to make lots of ad-hoc changes. Remind them what is included and what isn’t.
How Do We Make Maintenance Easier?
Finally, to ensure the best value for your clients and to make your life easier, use some of these tactics when building your applications.
Long-Term Support (LTS)
Use technology platforms with well documented LTS releases and upgrade paths.
Ongoing OS, language, framework and CMS upgrades should be expected and factored in for all projects so tracking an LTS version is a no-brainer.
Everything should be running on a supported version. Big alarm bells should be ringing if this is not the case.
Good Project Hygiene
Have maintenance tasks publicly in your feature backlog or issue tracking system and agree on priorities with your client. Don’t hide the maintenance tasks away.
Code level and functional tests allow you to keep an eye on particularly problematic code and will help when pulling modules out for refactoring.
Monitor the application and understand where the bottlenecks and errors are. Any issues can get added to the development backlog and prioritized accordingly.
Monitor support requests. Are end users providing you with useful feedback that could indicate maintenance requirements?
The Application Should Be Portable
Any developer should be able to get the system up and running easily locally — not just you! Use virtual servers or containers to ensure that development versions of the applications are identical to production.
The application should be well documented. At a minimum, the provisioning and deployment workflows and any special incantations required to deploy to live should be written down.
Maintenance Is A Genuine Win-Win
Maintenance is the work we need to do on an application so it can safely stand still. It is a standard business cost: on average, 75% of the total cost of ownership over a software application’s lifetime.
As professionals, we have a duty of care to be educating our clients about maintenance from the outset. There is a huge opportunity here for additional income while providing tangible value to your clients. You get to keep an ongoing commercial relationship and will be the first person they turn to when they have new requirements.
Continuing to provide value through your retainer will build up trust with the client. You’ll get a platform to suggest enhancements or new features. Work that you have a great chance of winning. Your client reduces their lifetime costs, they reduce their risk, and they get to stop worrying about performance or security.
Do yourself, your client and our entire industry a favor: help make web application maintenance become more of a thing.
(rb, ra, hj, il)
from Web Developers World https://www.smashingmagazine.com/2018/03/web-app-maintenance/
0 notes