ayejayque
AJQ.Training
489 posts
A Comparison Between Six Sigma & Total Quality Management
Six Sigma and Quality Management

Six Sigma is a business methodology that improves the value of processes by reducing and finally removing faults and variations. The notion of Six Sigma was introduced by Motorola in 1986, and it gained popularity when Jack Welch, the CEO of General Electric, adopted it too. The concept came into being after one of Motorola's senior executives complained about Motorola's poor quality; Bill Smith formulated the practice in 1986.

Quality plays a part in the success or failure of an organization, and neglecting it may prove very costly in the long run. Six Sigma ensures a higher quality of products by eliminating the flaws in the processes and systems. It identifies and removes the hurdles that stand in the way of reaching higher levels of perfection. According to Six Sigma, all such hurdles are defects and need to be eliminated.

Six Sigma has distinct levels for practitioners, called "Green Belts", "Black Belts" and so on. People who have attained these levels are recognized as experts in the Six Sigma process. As per Six Sigma, all processes that do not contribute to customer satisfaction are defects and have to be removed from the system. This ensures superior quality of products and services. All businesses strive hard to protect the quality of their brand, and Six Sigma supports this by removing the defects that hinder customer satisfaction. Six Sigma started in manufacturing but is now used in other businesses too. Implementing it needs adequate budgets and resource allocations.

There are two Six Sigma methods: DMAIC and DMADV. DMAIC improves existing practices, while DMADV creates new strategies and policies.

DMAIC Method
- D - Define the problem. In this phase, the problems need to be clearly defined. Customer feedback is very important here; it is meticulously monitored to identify the problem areas and why these problems occur.
- M - Measure the current process. This is done by using relevant data that gives an insight into the present processes.
- A - Analyze the data. This is the process of verifying the data. The root causes of the problems are studied and investigated to understand how they impact the whole process.
- I - Improve the current processes. This is done on the basis of the research and analysis of the previous stage. New projects are devised and created that ensure superior quality.
- C - Control the processes. This is done so that other problems and defects do not occur.

DMADV Method
- D - Design strategies and processes that guarantee customer satisfaction.
- M - Measure and identify important quality parameters.
- A - Analyze and develop contingencies to ensure superior quality.
- D - Design all minute details and processes.
- V - Verify all processes before implementing them.

Comparison of Six Sigma and Total Quality Management

Six Sigma and TQM are both effective tools for quality management. They do a similar job, but some differences exist between the two. Six Sigma is a newer concept than TQM, but it is not its replacement. The basic difference is that TQM focuses on sustaining quality standards, while Six Sigma focuses on measurably improving results. TQM demands continuous effort for high-quality products. Six Sigma introduces small changes in the systems; these changes yield effective results and a better level of customer satisfaction. TQM is about designing and developing new systems and processes, and it allows coordination among the departments.
New processes are developed on the basis of feedback and research. TQM maintains existing quality standards, while Six Sigma makes small changes in processes and systems to ensure high quality. The TQM process eventually saturates: there comes a time after which no further improvements are possible. Six Sigma hardly ever reaches that saturation stage because it always initiates the next quality project in line. TQM improves existing policies and procedures; Six Sigma improves quality. TQM ensures that each worker contributes to the improvement of present processes, systems, services, and work culture. Six Sigma focuses on identifying and removing the defects that stand between a business and success. TQM tweaks the present policies and systems to achieve superior quality; Six Sigma businesses focus on removing errors and defects to get high-quality products.

TQM is less intricate than Six Sigma. Six Sigma needs trained people, whereas TQM does not need any extensive training: Six Sigma requires certified professionals at designated belt levels, who are the only ones eligible to lead its implementation, while TQM can be handled by anyone with a capable mind. Six Sigma is based on customer feedback, is accurate, and is result-oriented, and it generally gets better results than TQM. Experts think that Six Sigma will outshine TQM with time.
The Role of Managers & Customers in Total Quality Management
Role of Managers in Total Quality Management

TQM is a continuous effort by the management to advance and improve processes and systems so that the end result is superior-quality products. All businesses need to take care of their customers, and customer feedback is essential. TQM generates processes and systems on the basis of customer feedback and market research, which goes a long way in the development of an organization.

Managers play an important role in Total Quality Management:

- Starting and applying TQM programs calls for a great amount of preparation and research. Managers must get trained in the many TQM practices before they implement them.
- TQM is a very expensive process and involves huge costs. The manager must allocate budgets for TQM at the commencement of each fiscal year, because there is no use crying over spilled milk later.
- Before you embark on the TQM journey, make sure that you read a lot about it. You must understand why quality is such an imperative parameter in all businesses, and you must buy into the idea of TQM first. If you do not own the process, it will be extremely difficult for you to convince others to implement it.
- Make sure that you understand your customers and the target market. You must visit new, existing, and potential customers and understand their expectations of your business as well as your brand. Customer feedback is important in formulating TQM strategies.
- As a manager, you need to work closely with senior management and HR professionals to develop sound implementation strategies. A manager is a link between the top management and the whole workforce.
- The manager is a facilitator at the office. It is his duty to assist employees in applying TQM.
- The manager selects and appoints the right individuals to work as line managers. They take charge of and control the entire project. These selected employees must be reliable and diligent, with the capability to take on, carry, and implement a critical project like TQM.
- The manager must assign the resources required for the successful implementation of TQM and set aside time for the many training programs. Employees who give valuable suggestions, ideas, and tips need to be appreciated. The end goal is to deliver top-quality products.
- The manager must also train subordinates to ensure a seamless implementation of TQM, without any hindrances.
- A manager communicates the benefits of TQM to all the workers of the organization. The employees must be called to a common platform and addressed to apprise them of the benefits and importance of TQM. Give them an understanding of how TQM programs yield high-quality products. This is beneficial to all the employees as well as the organization as a whole.
- If, as a manager, you train your own employees, you are bound to have better results. It is better than bringing in an external trainer loaded with information. A trainer needs preparation for every question, so do your back-office work very meticulously.
- A manager is supposed to be a source of inspiration for the other workers. You must practice TQM yourself before expecting others to believe you.
- Customer feedback must be monitored and taken into consideration when framing the company's main strategies. Give periodic reports to the staff highlighting the scope of improvement.

Role of Customers in TQM

Businesses must emphasize the quality of their products to survive fierce rivalry.
In the business world of today, there is no shortage of competitors in the marketplace. Customers will not come back if what was promised is not delivered. You can fool some people some of the time, but you cannot fool them all the time. Quality is a defining parameter for all performing businesses and should not be compromised. The responsibility to deliver quality products and services lies on the shoulders of each worker in the organization. The business, as a collective effort, needs improvement ideas to make foolproof systems and processes, and this is done by the workers. Product quality must meet, if not exceed, the expectations of the customers.

Customers play an important role in TQM

What is the main difference between a successful and an unsuccessful business? A successful business is one with many buyers in the market. There are other parameters too, but the customers are integral in determining the success or failure of a business. Business marketers must pay great attention to their end-users and what they expect from the business. Customer feedback should be frequently and carefully monitored before framing any key business strategy. You cannot ignore your customers: they pay for your products, and these products bring revenues to your organization and yield higher profits.

Understand the needs and demands of the customers

TQM ensures that employees know their target customers well. This must be done before making any changes to the processes and systems that are meant to deliver superior-quality products for better customer satisfaction. In fact, businesses introduce TQM to grow their customer base and gain higher levels of customer satisfaction. TQM increases an organization's loyal customers: people who would not go anywhere else, irrespective of what happens. Businesses survive only on the strength of their clients.

The quality of a product is defined by its durability, packaging, reliability, timely delivery, and also the customer's overall experience with the business. A disgruntled customer leads to a loss of business. In the service industry, employees must handle and interact with customers with the utmost care and professionalism. This ensures happy and loyal customers.

Make it a point to design various feedback forms for the customers. This will enable them to share what they feel and think about your products and services. Sometimes the feedback may favor one part of your organization but not the business as a whole. Negative comments or feedback from customers should never be ignored. As a part of TQM, employees should be on the same page, brainstorm, and explore ideas. This will generate concrete solutions that improve the systems and processes to deliver what the customer expects. TQM never helps if your customers are ignored.

With physical products, customers are satisfied when the products are durable, reliable, easy to use, adaptable, and appropriate. In the service industry, customers are satisfied only when employees are friendly and polite, are honest and don't make false promises, are easy to approach, listen to and address customer grievances, and respond to customer requests in a timely manner.
The Models & Tools of Total Quality Management
Total Quality Management Models

Total Quality Management, or TQM, results from the management and the employees working in tandem. It enables workers to concentrate on quality and to go all out to excel in whatever they do. According to TQM, customer feedback and expectations are very important when formulating and implementing new strategies. Many management gurus have contributed to the concept of TQM, among them Drucker, Juran, Deming, Ishikawa, Crosby, and Feigenbaum. Many models of TQM exist; one size does not fit all, so organizations need to know which one to select for their environment and business practices.

The models of total quality management:
- Deming Application Prize
- Malcolm Baldrige Criteria for Performance Excellence
- European Foundation for Quality Management
- ISO Quality Management Standards

TQM starts with understanding customers, their needs, and their expectations. The customer data must be collected meticulously, and data collection methods should be foolproof. This helps in understanding the target customers and their behavior. Business marketers need to know the demographics of the customers along with their needs and expectations, and they must fully understand their products in terms of customer needs and demands.

TQM demands meticulous planning and research. It integrates customer feedback with relevant information, which helps in planning and designing effective strategies to achieve high-quality goods. These strategies must be evaluated and reviewed periodically. Customers are satisfied when products meet their expectations, are good value for their money, and come with a pleasant experience of the organization. This ensures repeat business. Continuous improvements and modifications in the current processes are necessary to yield higher profits; processes can't stay the same forever. If a customer complains, find out the cause of the problem and rectify it using the appropriate TQM model. This ensures a high-quality product. Without the contribution of each and every employee, TQM would fail.

Quality Management Tools

Quality Management tools are used to collect and analyze data in order to comprehend and interpret information. These tools require widespread planning and the gathering of pertinent information about the end-users. Customer feedback and expectations need to be observed and assessed to deliver top-end products. Quality Management tools help to identify problems that occur repeatedly along with their root causes, and they improve products and services. Workers can easily collect the data as well as organize it, which results in a good analysis. Subsequently, concrete solutions are devised for better-quality products. The following are QM tools (a small Pareto analysis sketch appears at the end of this post):
- Checklist
- Pareto Chart
- Fishbone Diagram
- Histogram
- Scatter Diagram
- Graphs

What is Kaizen? – The 5 S of Kaizen

"Kaizen" is a Japanese word meaning "improvement" or "a good change." Kaizen is a continuous effort by each and every employee to improve all processes and systems of a business. If you ever work for a Japanese company, you will understand the importance that they give to Kaizen. Kaizen helps Japanese companies outshine their rivals. They religiously adhere to a certain set of policies and rules to eliminate defects and ensure superior quality, thereby gaining customer satisfaction. Kaizen works on the principle that "change is for the better": it is the continuous improvement of processes and functions through change.
It brings continuous small improvements to the overall processes. The Japanese feel that small continuous changes in the systems and policies are better than major changes. Kaizen aims at the continuous improvement of processes in manufacturing and all other departments. It is a collective responsibility of all who are linked with the organization; every individual needs to contribute through small improvements and changes in the system.

Five S of Kaizen

The "Five S" is a systematic approach to foolproof systems, standard policies, and rules and regulations for a healthy work culture. You would hardly find a Japanese worker unhappy or dissatisfied, and Japanese employees rarely bad-mouth their organization. Kaizen makes for a well-organized workplace, which improves productivity and yields better results. It nurtures employees so that they feel connected to the organization. The five S are:

- SEIRI – This means "Sort Out". Workers are expected to sort out their things. Label the items as "necessary", "critical", "most important", "not important now", "useless", and so on. Get rid of all useless stuff, keep aside what is not needed at present, and keep critical items safe.
- SEITON – This means "Organize". Workers waste precious time searching for items and important documents. Every item should have a designated place.
- SEISO – This means "Shine the workplace". The workstation should be clutter-free and clean. Filing should be done properly, and drawers and cabinets should store your organized items.
- SEIKETSU – This means "Standardization". All businesses have rules, procedures, and policies to ensure superior quality.
- SHITSUKE – This means "Self-discipline". Never come to work in casuals, follow all procedures, and always carry your identity card. It gives you a sense of pride and shows your attachment to the organization.
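The QM tools listed earlier include the Pareto chart. As a minimal sketch, the Python snippet below shows the tabulation behind a Pareto chart: hypothetical defect categories and counts are sorted and cumulative percentages computed so that the "vital few" causes stand out. The category names and counts are invented for illustration.

```python
# Minimal Pareto tabulation sketch (hypothetical defect data, not from the post).
defect_counts = {
    "Scratches": 52,
    "Misalignment": 31,
    "Wrong label": 9,
    "Loose screw": 5,
    "Other": 3,
}

total = sum(defect_counts.values())
cumulative = 0
print(f"{'Cause':<15}{'Count':>6}{'Cum %':>8}")
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<15}{count:>6}{100 * cumulative / total:>7.1f}%")
# The first one or two causes typically account for most defects (the "vital few").
```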
The Role of Logistics in Supply Chain Management
Supply Chain Management includes the planning, design, control, and execution of business processes concerned with procurement, manufacturing, distribution, and sales. Many vendors are involved; SCM experts channel these vendors and service providers to move goods and materials from assorted locations all over the world. Supply chains are driven by logistics, whether by road, rail, air, or ship. In-between warehouses hold inventories before goods move to the next locations. Multi-tier suppliers, agents, freight forwarders, packers, customs departments, distributors, and logistics service providers run the show in tandem.

Logistics is the Nucleus of Supply Chain Management

The supply chain is often referred to as logistics and vice versa. Logistics and supply chains are intricately linked but different: logistics is a subset of the supply chain. Supply chain strategy designs and details the procurement, manufacturing, and distribution network for finished goods. In production procurement, SCM strategy defines the process, vendors, procurement, and mode of order fulfillment. In finished goods distribution, SCM strategy defines the network design for stock holding and the other channels of distribution.

Logistics in the Supply Chain

Third Party Logistics (3PL) service providers, at both global and local levels, form the major partners that manage and offer supply chain services. Internet and IT technology help manage the information and data that move ahead of or along with the flow of materials and goods.

Logistics Service Providers Keep the Supply Chain in Motion

Procurement logistics, manufacturing logistics, and finished goods logistics functions are managed by independent departments. These departments have some common activities, but the nature of the logistics functions is specific to each of them. With the advent of 3PL (third-party logistics), companies outsource all supply chain components and noncore logistics functions to these providers. In any logistics contract or supply chain network, one single service provider cannot manage the whole chain of activities; a lead logistics service provider will manage the other service providers, which gives the client a single-window service. Components of logistics get outsourced by these service providers to external contractors; labor, yard management, and fleet management are often outsourced. 4PL providers undertake large projects with huge volumes, multiple locations, and services as lead service providers. They make the operating plans, requirements, and specifications for the services and provide the entire range of logistics services to the client.

Usually, freight forwarders, transporters, and warehousing service providers are the face of logistics. Freight forwarders consolidate and book the cargo for onward freight using an airline, shipping line, or railway as required. Freight forwarders do not own transportation services: they book space with shipping lines and negotiate the freight. They offer origin and destination services through a single window, and they have a customs clearance division to support ground logistics too.

International Logistics

About 80% of global trade moves by sea. Air routes are expensive and are used only when the cargo is light in weight. Shipping companies own vessels and specialize in the transportation of cargo of all kinds. They have mother vessels deployed on major routes; these are bigger vessels with higher cargo capacity, and detailed schedules are announced in advance for each vessel.
The feeder vessels carry cargo to be transshipped onto the mother vessel. Shipments are commonly made in FCL containers. FCL stands for Full Container Load; FCLs come as 20-foot and 40-foot containers with fixed dimensions and weight-carrying capacity.

Reverse Logistics

There is another extension to the supply chain process called reverse logistics. This deals with the return of products from the clients to the company, whether as warranty returns or even unsold inventory. The Green Logistics initiative has a detailed process for suppliers and manufacturers to adopt color-coding systems that identify the kinds of waste: reusable, recyclable, green waste, and the like. Awareness has forced companies to adopt standards and measures to ensure that recycling and e-waste handling minimize environmental impact, reduce scrap, and allow the complete recovery of waste materials. The automotive and computer industries have successful reverse logistics practices. They use this as a marketing strategy to project the company's CSR with respect to waste management, and they also donate funds raised from scrap disposal and recycling.

Reverse Logistics & SCM

Supply Chain Management is about the flow of raw materials and finished goods. It is also about the reverse flow of unsold finished goods, parts, and packaging materials from the customer back to the organization. Reverse logistics is used by the automotive, electronics, retail, and publishing industries in a very big way. Businesses can recall defective goods, and packaging and defective materials, when recycled, generate value for the company. Unsold and outdated goods are also collected by the business. Businesses that take care of waste and hazardous materials have a good reputation in the eyes of society, so reverse logistics is also an extension of the marketing strategy. The reverse logistics process is as intricate as a normal supply chain and demands the same focus and multiple logistics partners.

Contract Logistics - An Important Chunk of Supply Chain Management

Supply chain activities constitute multi-modal activities in one or more locations in the network. These activities may be either local or global. In the pharma industry, steep manufacturing costs have forced businesses to shift manufacturing out of Europe and the USA to cheaper nations. Finished goods are dispatched from the plant directly to the designated distribution center in the country or abroad. The distribution center manages the inventory and completes all in-house processes. The highest level of inventory is held at the warehouse that holds supplier parts; shipments in the pipeline are very small compared to the warehouse inventory. Warehouses are critical to supply chain networks. They are the focal points in the supply chain, and their location and functioning affect the efficiency of the rest of the supply chain. Distribution centers, VMI centers, parts centers, and various warehousing activities are now outsourced to 3PL service providers, although many companies still manage this in-house. Businesses have invested in building contract logistics capabilities. Cases where the requirement is more than a warehouse are known as Contract Logistics.

Contract Logistics RFQ Process

Outsourcing is of two kinds. One kind is the flow-through warehouses, merging centers, and distribution centers. The second kind is the larger distribution centers managing finished goods inventory and related operations.
These operations are critical in nature and are characterized by the volume and value of inventory held, the size of operations, and their importance in the supply chain network. The RFQ should contain the following complete details:

- Detailed business requirements, service specifications, IT processes, and products.
- Project scope covering activities, volumes, IT infrastructure, etc.
- Service levels expected, with metrics for all operations.
- RFQ dates for vendor meets, presentations, and bid submissions.
- Project span, mentioning vendor criteria, the process of selection, and project implementation.
- Contract period and extensions.
- Pricing and costing.
- Legal and statutory compliance needs.
- Inventory liability, third-party liability, and insurance.
- Contract and agreement terms along with a draft.
- Any specific requirements of the buyer.
- Confidentiality agreements and NDAs, if any.

RFQ Process

The RFQ is prepared by the business function manager. Once the document is prepared, procurement issues the RFQ to all vendors in the market. The buyer invites all participants for a Q&A session; usually, the answers are posted to all participating vendors, which ensures a fair chance for all. The vendors can meet and discuss the scope in detail, and the buyer facilitates clarifications and discussions with all business functional groups as required by the vendors. On the RFQ due date, vendors submit a bid response document. Shortlisting happens internally via the related functional teams and project leaders: response documents are studied, graded, and tabulated. Shortlisted vendors are given an opportunity to present their solution to the buyer's team on the given dates. The final selection happens internally. Procurement negotiates with the selected vendor and comes to agreeable terms and conditions, internal management approval is obtained from the concerned authorities, the selected vendor is announced, and an LOI is given to the finalized vendor.

Types of Pricing Models in Contract Logistics:
- Fee based on sales turnover or volume.
- Cost-plus model: the total cost of running operations, plus profit as a management fee calculated as a percentage of the total cost (a small illustration appears at the end of this post).
- Price per square foot.
- Transaction and fixed-price combination.
- Cost per transaction.

Contract Logistics Costs

Warehousing costing depends on the business model. Some providers use common shared facilities, while some use stand-alone facilities. Some of the usual cost heads are as under:
- Land & building
- Infrastructure
- IT infrastructure
- Manpower
- Utilities & consumables
- Administrative expenses
- Overheads
- Profit/management fee

Contract Logistics Solution Design Document

In any project involving the outsourcing of warehousing operations, setting up a Distribution Center requires detailed attention and work from both parties. The buyer details the RFQ document, and the 3PL service provider works on a detailed solution. The RFQ response shows the proposed solution and the vendor's understanding. A response to an RFQ will contain the following sections:
- Covering letter with enclosures
- Solution design
- Testimonials, with photos or video if possible
- Company profile, management structure, and financial information
- Project implementation, including project team, project sponsor, management team, proposed timelines, schedule, etc.
- Pricing
- Terms and conditions
- Any deviations from RFQ assumptions, along with justification
- Any other information regarding the solution design
Solution Design Document

The Solution Design Document details the proposed solution, matching the stated requirements. It is driven by the business development team along with the solution design team. Once the solution design is ready, it is reviewed by the operations team, the costing team, and IT. It is then sent for internal acceptance and approval before submission to the client.
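As a rough illustration of the cost-plus pricing model listed above, the sketch below totals hypothetical monthly warehouse cost heads (mirroring the cost list in this post) and adds a management fee as a percentage of cost. All figures and the 10% fee are invented for illustration.

```python
# Hypothetical cost-plus billing sketch for a contract logistics warehouse (illustrative numbers only).
monthly_costs = {
    "land_and_building": 40_000,
    "infrastructure": 8_000,
    "it_infrastructure": 5_000,
    "manpower": 25_000,
    "utilities_and_consumables": 6_000,
    "administrative_expenses": 4_000,
    "overheads": 7_000,
}
management_fee_pct = 0.10  # assumed fee percentage

total_cost = sum(monthly_costs.values())
management_fee = total_cost * management_fee_pct
invoice_total = total_cost + management_fee

print(f"Total operating cost:    {total_cost:,.0f}")
print(f"Management fee (10%):    {management_fee:,.0f}")
print(f"Amount billed to client: {invoice_total:,.0f}")
```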
Closing Project, Recording Results, Knowledge Transfer, & Handing Over in Six Sigma
Closing a Six Sigma Project

Six Sigma is an intricate methodology that takes time to learn and implement. Innumerable organizations have transformed their operations on the strength of their Six Sigma capabilities, and efficient Six Sigma processes are a prerequisite to surviving today's competition. People sometimes think that the formalities involved in closing a Six Sigma project are many and useless. In most organizations, however, Six Sigma is a philosophy and a way of life, and the closing formalities do add value to subsequent projects.

Why Work on Closing a Project?

Here is why:

Positive Lessons: Each project brings some new lessons, many of them positive. These could be newer technology, determining the drivers of a process, and so on. Capturing the positive lessons and preserving them in the collective knowledge pool of the organization surely helps in the long run.

Negative Lessons: There are many issues and trials faced during the project. Sometimes projects get delayed because of unforeseen hurdles. Over time, teams realize that some things work and others don't. Such knowledge is experience-based and cannot be found elsewhere.

Are All Stakeholders' Hopes Being Met? Six Sigma projects have many stakeholders. In the end, it is important that everyone feels their needs have been met by the project team; no one should get the raw end of the deal. This is why closing Six Sigma projects requires discussions with stakeholders. Finally, a large number of signoffs are required.

Is the Knowledge Preserved in the System? Documenting the lessons learned makes things easier for future generations of project teams. They won't have to reinvent the wheel: if they face a similar problem, they can simply look up the solution in the collective knowledge pool and start working from there.

Recording the Results of a Six Sigma Project

Closing a project is an intricate exercise, and the documentation of the project is extremely important. Here is a list of the activities that need to be performed while formally closing a Six Sigma project.

Ensuring that the Objectives Have Been Met: The most obvious thing to do when closing a Six Sigma project is to ensure that the objectives of the project have been satisfactorily met.

Ensuring that the Results are Standardized: Six Sigma projects build process capability. The results obtained need to be standardized so that they do not depend on the skill, knowledge, or expertise of any one person, and everyone can work independently on the process.

Ensuring that the Results are Error-Proofed: The Six Sigma team lists the scenarios where things can go wrong and works towards preventing them from happening. The plans should be documented with the process owner, who removes all remaining errors from them. Experts must be appointed to solve the problems that the Six Sigma team can forecast.

Ensuring that the Knowledge is Documented: Every Six Sigma project brings with it a wealth of knowledge. This knowledge may pertain to anything about the process in question. It needs to be explicitly documented and stored in the collective knowledge repository of the organization for future use.

Handing Off the Project to the Process Owner

When the control charts have been created and deployed, the control phase of the DMAIC methodology comes to a close. The solution that the team required has now been realized, and it is time to transition the process from being a project to being a regular operation of the business.
This is done through the Six Sigma transition methodology. The details are as follows:

What Does a Project Hand-off Signify?

A project handoff means the return to business as usual. When the plan is implemented, responsibility for the process is transferred from the Six Sigma team to the process owner. The transition plan marks this shift in accountability too.

Steps to Successfully Hand Off Projects

Here are a few steps that every Six Sigma project team must follow to ensure that the process has been successfully transferred.

Document the SOPs: The Six Sigma team is expected to write down the standard operating procedures. It is important that this documentation be done to avoid communication errors. Documentation also ensures that the knowledge stays within the system.

Document the Risks: This is done by explaining to the process owner the Failure Mode and Effects Analysis that was conducted and the plans that have been created to lessen the risks involved. The process owner may recommend variations and modifications to the process.

Appoint Experts: The Six Sigma team must also explain the mitigation plan. Experts must be appointed, and the process owner must know exactly which expert to contact and when. This must be documented in the process too.

Get the Signoffs: After everything has been done, it is time for signoffs. These signoffs mean that the Six Sigma team has achieved what it set out to do, and the process owner endorses this fact.

Conducting the Knowledge Transfer

The Six Sigma project is not complete until the knowledge transfer is done. Many Six Sigma projects have failed because the project team was overconfident and overjoyed at achieving success on paper; the papers mean nothing until the improvements are implemented, and the improvements may not be carried forward with the same zeal by the operating team. Conducting the knowledge transfer is therefore a central part of the Six Sigma project and must be done carefully. Some advice is given below.

Key Points to Take into Account while Conducting Knowledge Transfer:

Chronology: The knowledge transfer must be iterative and chronological. The first step is to train the senior and middle-level management; after this, the executives and the supervisors must be trained. The upper management must be involved in the exercise: when the upper management trains the lower staff, they refresh their own training. This is important because the lower staff will first tell their immediate management if something goes wrong, so the upper management must know how the new process works.

Buy-In: The Six Sigma project team must create buy-in for the solution. Sometimes workers feel that the improved methods are designed to eliminate their jobs. The idea is that the design assists the workers in their jobs, and this must be communicated to them.

Budget: Before the training, the Six Sigma project team must give the process owner a budget for it. This budget must cover the resources required. Training must be held on a rotation basis so that normal working is not disrupted.

Knowledge Transfer Regarding Technology: Workers must be comfortable with the new technology that will be used. The training team must ensure that the technological innovation is well understood and properly applied.

Best Practices: Workers must be told about the Standard Operating Procedures. They may get confused between the old ways and the new ways that have been put into place.

Reporting: Changes in the process demand a change in the reporting structure. Workers must know who they report to and who is responsible.
Evaluating the Financial Benefits of the Six Sigma Project

Because many unscrupulous consultants have surfaced recently, promising unbelievable benefits from Six Sigma projects, checks are now imposed to ensure the financial viability of such projects.

Contemporary Issues in Financial Evaluation

Projected Numbers Never Materialize: When the Six Sigma project ends, the finance certifier issues the PNV of the project. This is the Projected Net Value of the project, which is different from the Net Present Value (NPV). The calculations are hypothetical in nature; everything is assumed. Past projects have shown that reality is often different, and sometimes the projected numbers were not realized.

Calculations are Off-Base: Sometimes the numbers used to show the financial viability of the project are off-base. It was assumed that market shares would increase, but they did not, or the consultants accounted for gains that were beyond the control of the business.

Bonuses are Paid on Projections: Sometimes bonuses were paid to executives on the basis of fudged numbers. Executives had an interest in inflating the financial viability of the project, which misled the strategic initiative of the organization involved.

Solutions to Contemporary Issues

Many organizations have become vigilant about what they spend on Six Sigma initiatives and what the ROI is. Here are some of the initiatives taken:

Follow-Up Review: A project is not considered a success on completion; reviews are set up every six months.

Standardized Calculation Policy: Having this in place has eliminated the problem of fudged numbers to a large extent.

Personal Accountability: A large part of the bonus is linked to the realization of gains. The bonus is announced in one go but paid over a period of time, and only if reviews show the project is on track.
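The post contrasts the Projected Net Value issued at closure with the Net Present Value without showing how NPV is computed. As general background, a minimal NPV sketch is below; the project cost, projected yearly savings, and 10% discount rate are invented for illustration.

```python
# Minimal NPV sketch: discount projected yearly savings back to today (illustrative figures).
initial_investment = 100_000                   # assumed project cost
projected_savings = [40_000, 45_000, 50_000]   # assumed savings for years 1-3
discount_rate = 0.10                           # assumed cost of capital

npv = -initial_investment + sum(
    cash / (1 + discount_rate) ** year
    for year, cash in enumerate(projected_savings, start=1)
)
print(f"NPV: {npv:,.0f}")  # a positive NPV suggests the project pays off, if the projections hold
```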
The Steps of Creating & Analyzing Control Charts
Creating and Analyzing Control Charts

Once the suitable control chart type has been selected, it is time to create and monitor it. This helps correct the process as soon as something goes off-track. The best practices for creating and analyzing control charts are given below.

Non-Human Measurements: Control charts should use non-human measurements wherever possible. The measurements should be automated and fed into the system; human intervention introduces bias and must be avoided.

Software: Most statistical software has a control chart feature. These can be picked up off the shelf and implemented, which makes the job easier.

Real-Time: Control charts must be created in real time. This gives the management a chance to invoke corrective action as soon as an inconsistency is detected.

Creating control charts is not difficult; the difficulty lies in keeping them populated, analyzing them frequently, finding trends for out-of-control processes, and automating all of this. The Six Sigma methodology is advanced and so is the software, but one still needs to know the concepts involved. Once automation is in place, even average managers can use these tools without hassle.

Purpose: Control charts are created to separate special cause variation from normal variation. Special cause variation indicates an out-of-control process; it has to be detected early and singled out, which makes the process more efficient. Here is how to detect an out-of-control process.

Out-of-Control Limits: On a control chart, points that lie above or below the control limits indicate an out-of-control process. This warrants corrective action.

Non-Random Patterns: Special cause variation can exist even when all points lie within the control limits. If the points show a pattern, the management must understand that pattern and take the necessary actions, because it could be due to special cause variation about to push the process haywire.

Step 1: Selecting the Correct Variables to Monitor

Six Sigma is based on measurement, comparison, and setting corrective action plans. The variables (metrics) being measured should be appropriate: wrong measurements result in wrong decisions, which are expensive and a waste of time and resources. There are two types of variables being measured: the primary metric and the secondary metric. The primary metric is the one checked for improvement; the secondary metric must be checked to ensure that it does not deteriorate. Both inputs and outputs are to be measured.

Characteristics of Variables
- Must display the state of affairs
- Must be critical to the process
- Must be easy to measure
- Must produce correct measurements
- Must be measurable over a period of time

The correct variables ensure a good control plan. The procedure may look simple, but the wrong variables can have an adverse impact on your project, including making it a failure.

Step 2: Select the Correct Sampling Plan

Once the metrics are chosen, we establish the correct sampling plan. In a large organization, there are millions of instances of each variable over time, and one cannot map them all to establish control. A sample is chosen and the analysis takes place on that.

Concept of Rational Subgrouping

To understand this, we must recognize that there are variations within a given sample subgroup and between sample subgroups too. In Six Sigma, we look for differences between the sample subgroups; variation within the same sample subgroup must be kept to a minimum.

How to Create a Rational Subgroup

Rational subgrouping looks all too obvious and deceptively simple.
Yet teams often have issues implementing it. Sample subgroups must be homogeneous within themselves.

Size: The size of the sample must be small enough to allow easy collection and quick analysis, yet large enough to be representative of the population.

Frequency: The measurements of the elements of a rational subgroup must be recorded at the same frequency of time. This negates the external factors and keeps everything homogeneous.

Composition: Different subgroups must not be mixed together; each subgroup must be homogeneous.

Step 3A: Choosing the Correct Control Chart (Discrete Data)

If the data type is discrete, then it must fall under either the binary or the count type. In binary data, there are only two possibilities: success or failure, defective or not defective. In count-type data there may be more than two possibilities, and the defects need to be counted. The difference is subtle, but with fixed rules governing when each chart should be used, the confusion is reduced.

Count, Equal Subgroup Size: Here, the Six Sigma methodology suggests the C chart. The C chart counts the defects occurring per unit of time: per minute, per hour, per day, or per week. Because the time period is fixed, so is the sample size. The C chart shows how many observations from a sample failed to meet the criteria specified.

Count, Unequal Subgroup Size: Here, the Six Sigma methodology recommends the U chart. The U chart counts the rate of defects: it monitors the number of units and how many have failed the given criteria. The U chart will tell you, for example, that 5 out of 1,253 units failed to comply. It needs no fixed time period or fixed sample size, which makes it more convenient to use.

Binary, Equal Subgroup Size: Here, the Six Sigma methodology recommends the NP chart. The NP chart counts the number of defectives per period of time, similar to the C chart, but there is a difference between the two: the C chart is used when defects are rare, while the NP chart uses the binomial distribution and the defect occurrences do not have to be rare. If more than 5% of units in a process are defective, the NP chart should be used.

Binary, Unequal Subgroup Size: Here, the P chart is recommended. It is similar to the U chart in that it takes into account the number of units in the process, but it expresses the defectives as a percentage. Like the NP chart, the P chart uses the binomial distribution and is used when the defects are not rare.

Step 3B: Choosing the Correct Control Chart (Continuous Data)

Many types of continuous control charts are also available, and the Six Sigma methodology prescribes which chart must be used and when. The prescription in the case of continuous data is based on the size of the sample. The following could be used:

Individuals Chart: Here, each observation is plotted as a separate data point. There is no rational subgrouping of data. This chart is used when the sample size is 1.

Moving Range Chart: The moving range chart plots the difference between two consecutive data points, so the sample size for this type of chart is 2. The rational subgrouping of data points is that they are consecutive. The moving range chart has one less data point than the individuals chart.

X-Bar R Chart: The X-bar and R chart controls a process when the sample size is small and constant. The X-bar chart and the R chart are two different charts that should be read in tandem to understand the behavior of a process. The X-bar chart shows the mean performance of the process.
The R chart shows the difference between the smallest and largest values in each subgroup and explains the variability of the process.

X-Bar S Chart: The X-bar and S charts are employed when the sample size is large and/or variable. The X-bar chart ensures that the mean of the process is in control, and the S chart monitors the standard deviation. When used together, they help monitor a very large process with ease.

Non-Random Patterns

Cyclical Pattern: A cyclical pattern is a predictable situation in which data points rise above and fall below the process mean repeatedly. In control charts, cyclical patterns signify special cause variation; they are not random. Cyclical patterns may emerge due to:

Operator Fatigue: The most common cause is operator fatigue, which hints at incorrect job design. An operator is unable to work at the same level throughout the day.

Production Equipment: Wear and tear of the production equipment is another cause. Voltage and power can also fluctuate in predictable, non-random patterns.

Trend Pattern: A trend pattern is a situation where the data points lie within the limit lines drawn on the control chart but display a specific trend. A trend is a movement of seven consecutive points in one direction, either increasing or decreasing. Non-random trend patterns can arise from the following:

- Learning Curve: Workers learn by doing. Over time, they become experts in what they perform and take less and less time to do it. This shows up in the control charts as a trend: over time, cycle times will fall and productivity will increase. The management needs to train employees so that they are already fairly acquainted with the task when they come to work.
- Noise Factors: These are disturbances in the process. When volume increases, the average time taken by a process is likely to go up. With the increase in volume, workers feel overwhelmed by the tasks they handle and communication becomes difficult. This will show up as a trend.
- Shift in the Process: A shift in the process is a pattern in which seven consecutive data points appear on one side of the mean. A shift is vital for the management to notice: it shows that the average of the process has moved to a different level. The causes of non-random shift patterns are as follows:
- Process improvement
- Introduction of new inputs
- Change in quality control measures

Two out of Three Points at Two Sigma or Beyond

The logic behind this pattern is interesting. The normal curve says that about 95% of data points will lie within two standard deviations of the mean. So if two out of three consecutive points lie at or beyond two standard deviations, we can safely say that the 95% criterion is not being met. This is an early warning sign to the management that the process is about to go out of control. A watchful management will make remedial action plans, look into the process, recognize the issue, and correct it as soon as possible.
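As a minimal sketch of the detection logic described above, the Python snippet below computes an individuals-chart mean and control limits from a sample series and flags three of the signals discussed: points beyond the limits, seven consecutive points trending in one direction, and two out of three consecutive points beyond two sigma. The data values and the use of the plain sample standard deviation are illustrative assumptions, not a full SPC implementation.

```python
import statistics

# Illustrative process measurements (hypothetical data).
data = [10.1, 9.8, 10.3, 10.0, 10.2, 9.9, 10.4, 10.6, 10.7, 10.9, 11.0, 11.2, 11.4, 12.9]

mean = statistics.mean(data)
sigma = statistics.stdev(data)  # simplified; SPC normally estimates sigma from moving ranges
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Rule 1: any point beyond the control limits.
beyond_limits = [i for i, x in enumerate(data) if x > ucl or x < lcl]

# Rule 2: seven consecutive points all increasing (or all decreasing) - a trend.
def has_trend(values, run=7):
    for i in range(len(values) - run + 1):
        window = values[i:i + run]
        ups = all(a < b for a, b in zip(window, window[1:]))
        downs = all(a > b for a, b in zip(window, window[1:]))
        if ups or downs:
            return True
    return False

# Rule 3: two out of three consecutive points at or beyond two sigma on the same side.
def two_of_three_beyond_2sigma(values):
    hi, lo = mean + 2 * sigma, mean - 2 * sigma
    for i in range(len(values) - 2):
        window = values[i:i + 3]
        if sum(x >= hi for x in window) >= 2 or sum(x <= lo for x in window) >= 2:
            return True
    return False

print("Points beyond limits:", beyond_limits)
print("Trend of 7 detected:", has_trend(data))
print("2 of 3 beyond 2 sigma:", two_of_three_beyond_2sigma(data))
```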
The Importance of Relationships, the Predictive Equation, and Inputs in Six Sigma
What is Correlation Analysis?

Let us understand correlation analysis with the help of an example. The management of a factory has data suggesting that as the shift time increases, productivity decreases. This is just an observation based on raw data. The Six Sigma methodology is based on objective facts, not on people's opinions, and correlation analysis will help confirm whether this observation is true.

How is Correlation Analysis Performed?

For correlation analysis, sufficient data must be present for the variables. This data is then put into Karl Pearson's correlation formula. The calculations are involved enough that they once mandated a statistician on the Six Sigma team; these days all calculations are performed by computers, and the humans only supply the data and interpret the results.

How to Interpret the Results of Correlation Analysis?

A correlation analysis yields a result that lies between +1 and -1. The positive or negative sign indicates the direction of the correlation: a positive sign denotes a direct correlation, a negative sign denotes an inverse correlation, and zero means no correlation.

Understanding that Correlation Does Not Imply Causation

Correlation analysis confirms that some given variables move in tandem. With correlation analysis alone, it is impossible to say which variable is the cause and which is the effect. It is also possible that both move in tandem because a third, common variable affects them. Usually, though, knowing that the variables are correlated is enough to take pertinent action.

Types of Relationships between the Input and Output

The scatter plot can be a useful tool for knowing the type of relationship between the inputs and the outputs.

- No Relationship: The scatter plot gives an obvious indication when the inputs and outputs on the graph are not related.
- Linear and Non-Linear: A linear correlation exists when all points are plotted close together and form a distinct line.
- Positive and Negative: A positive relationship is one where more input leads to more output; this is a direct relationship. A negative relationship is one where more input leads to less output; this is an inverse relationship.
- Strong and Weak: The strength of the correlation is based on how closely the data fits the shape.

Developing the Predictive Equation

Devising an equation that shows the precise relationship is called regression. Regression summarizes the relationships in the scatter plot as an equation. (A small numerical sketch of correlation and regression appears at the end of this post.)

How is Regression Used?

In modern Six Sigma projects, the regression equation computation is computerized. The personnel need not understand every detail that goes with it, but they must know the main types of regression equations. They are:

- Linear Regression: There is only one input variable and one output variable in linear regression. The input variable is independent and the output variable is dependent.
- Multi-Linear Regression: There are multiple input variables and only one output variable. These equations are harder to create. They check the combined effect that multiple variables have on the output.
- Non-Linear Regression: These are complex equations, and they are rare.

Application of the Predictive Equation

The Six Sigma team now knows the effect that the input variables have on the output. They can see whether the effect is noteworthy when compared to other variables, and they can work out the correct input that needs to be maintained if a given output is to be achieved.

Recording the Inputs (Xs) at the Actionable Level

The 5 Whys method, or root cause analysis, plays an important role in getting the Xs to an actionable level.
5 Whys Method

This is a non-statistical method to drill vital inputs down to their actionable level. It does not need mathematical testing; it is based on brainstorming the reasons and digging deeper and deeper.

Procedure
- Understand the Xs and Ys
- Brainstorm
- Analyze the responses
- Iterate

The "5" is a mere guideline; the actual number of iterations required may be fewer or more, though actionable levels are usually reached within five iterations. The 5 Whys analysis for finding actionable Xs is a combination of root cause analysis and brainstorming: instead of expert knowledge, the collective knowledge of the team is used here.

Ensuring that the Inputs (Xs) are Recorded at the Actionable Level

A hypothesis test establishes a relationship between two variables. The next step is to determine whether this information is actionable; this is the objective of the Analyze phase. Whether the Xs are actionable decides whether the analysis meets the tollgate deliverables set for the phase.

What is an Actionable X?

Consider that a difference exists between two variables, and assume that the difference is significant. Take a comparison of one factory with another. Could the location be the difference? We only know that their outputs are different. There could be a difference in management, a difference in worker skills, a variation in machinery, location, or countless other factors. The hypothesis is only the first stage in this scenario: it has given us a clue about the existing variation. It needs to be explored until all ambiguities are removed and an action plan is developed to eliminate the variation. This action plan is the actionable X.

Subjectivity of the Actionable X

We need to understand that deeper and deeper analysis is always possible. The Six Sigma project team has to take a call on when they have enough information to develop an action plan. The actionable X as a concept is vague; it depends on the experience and maturity of the team. The Project Champion or the project lead usually takes the call on whether the Xs are sufficient for action.

Should Actionable Xs be Pursued for all Inputs?

Converting the Xs into actionable Xs may require time and resources. Not all Xs need to be recorded at an actionable level; this is the call of the Six Sigma team. Often, only the vital few Xs show a significant effect on the process, and these must be recorded at an actionable level.
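The shift-time example at the top of this post can be made concrete with a small sketch. Below, Pearson's correlation coefficient and a simple linear regression (the predictive equation) are computed for a handful of hypothetical shift-length and productivity readings; the numbers are invented for illustration.

```python
import statistics

# Hypothetical data: shift length in hours (X) and units produced per worker (Y).
shift_hours = [6, 7, 8, 9, 10, 11, 12]
productivity = [52, 50, 49, 47, 44, 42, 39]

# Pearson correlation coefficient r (statistics.correlation requires Python 3.10+).
r = statistics.correlation(shift_hours, productivity)

# Simple linear regression: productivity ~ slope * shift_hours + intercept.
slope, intercept = statistics.linear_regression(shift_hours, productivity)

print(f"r = {r:.2f}")  # close to -1: a strong inverse correlation
print(f"Y = {slope:.2f} * X + {intercept:.2f}")
print(f"Predicted productivity for a 9.5-hour shift: {slope * 9.5 + intercept:.1f}")
```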
Hypothesis Testing, & Understanding it in Light of Six Sigma
What is Hypothesis Testing?

Hypothesis testing is a statistical method for confirming the effect that critical inputs have on the outputs. It is used when the inputs are measured in isolation. The outputs may be either discrete or continuous, but the inputs have to be discrete; if the inputs are continuous, then correlation and regression testing is used instead. The fundamentals of framing a hypothesis are:

Null and Alternate Hypothesis

Any test has a null and an alternate hypothesis. The null hypothesis assumes no relation between the samples; the alternate hypothesis asserts a relationship. Hypothesis testing considers both possibilities and reaches a decision as to which of the two is valid.

The Null Hypothesis: Null means zero. It implies no relationship among the variable parameters being measured; the null hypothesis assumes all variables to be alike.

The Alternate Hypothesis: The alternate hypothesis opposes the null hypothesis. We never test the alternate hypothesis directly: when we reject the null hypothesis, the alternate hypothesis is automatically accepted. The types of alternate hypotheses are:

Directional: A directional alternate hypothesis states the direction of the relationship between the variables.

Non-Directional: A non-directional hypothesis states only that there is a significant difference between the samples being measured.

One must know whether the alternate hypothesis should be written in the directional or the non-directional form. This is the most important role of the Six Sigma practitioner in the Analyze phase. There are tools that solve the problem automatically, provided the hypothesis has been correctly formulated.

Understanding the Confidence Interval

The confidence interval is an integral concept of hypothesis testing, and an overview of it is essential for the Six Sigma team. It rests on the following:

Point Estimates vs. Interval Estimates: The normal distribution is continuous, so the probability of hitting any exact point value is nil. This has implications for estimation: instead of giving a point estimate, one needs to give an interval estimate as the answer. We cannot state the exact value, but we can make a fairly educated guess that the value will lie between, say, 80 and 120.

What is a Confidence Interval? If we say that a value will lie between 80 and 120 with 90% confidence, we are saying that there is a 9 out of 10 chance that this will be so. This holds only over a large number of repetitions of the experiment. The confidence interval is a result of sampling, so we can fairly conclude that in a sample, about 90% of the observations will have a value between 80 and 120. The factors that influence the confidence interval are listed below:

Sample Size: Confidence improves as the sample size increases. When the sample size grows, there is more evidence and the sample becomes closer to the population; more data means fewer sampling errors.

Sample Variation: The confidence interval gets narrower as the sample variation is reduced. With homogeneous samples, one is more confident about one's predictions.

Relationship to Hypothesis Testing: Hypothesis testing is done on samples, and the values drawn from the samples may differ from the actual value of the population. This is sampling error. Hypothesis tests conducted at higher confidence levels give more reliable conclusions than ones conducted at lower confidence levels.

Selecting the Correct Hypothesis Test

The Six Sigma project team is expected to apply these tests to uncover facts.
These will then be used as a basis for decisions. The most basic decision when conducting a hypothesis test is what type of hypothesis test should be conducted.

One-Tailed vs. Two-Tailed: Tests are of two types, one-tailed or two-tailed. If we compare the differences between the Average Handling Times (AHTs) of two different call centers, a two-tailed test will detect any significant statistical difference between the samples, on either side. In a one-tailed test, we must choose between an upper-tailed or lower-tailed test. The upper-tailed test checks whether one of the samples is higher than the other. If that sample in fact has a lower value, the null hypothesis will be retained and no difference will be reported. The lower-tailed test is the exact opposite of this.

Decision Criteria

The three simple decision criteria for the selection of the correct hypothesis test are:
- The number of groups being tested
- Whether the Ys are discrete or continuous
- The population parameter being compared

The Possibility of a Mistake

Hypothesis testing is statistical analysis. It depends upon the data gathered. The hypothesis testing method takes all possible mistakes into account. The Six Sigma practitioner has to choose between the two possible error types that could occur. It is a trade-off: protection from one error exposes you to the other. Sometimes, one error is better than the other.

The Two Types of Errors:

Alpha Error: Alpha error is the risk of concluding that an input is part of the vital few inputs when in reality it is not.

Beta Error: Beta error is the risk of concluding that an input is not part of the vital few inputs when in reality it is.

Choosing which error to make: Alpha and Beta errors are interrelated. Trying to avoid one leads to the other. Six Sigma practitioners must understand both mistakes and choose the one they would rather make. We cannot be perfect. Instead, we will find a way to be less erroneous.

What is the P-value?

The P-value is a statistical representation of the possibility that the null hypothesis is true. It is the probability that the output (Y) will not change due to the variation that we are intentionally introducing in the input (X).

Decide the Relevant P-value

The P-value is a central part of the hypothesis problem. A slight change in the P-value threshold changes which hypothesis is accepted and which is rejected. The threshold must therefore be carefully selected; errors occur with the wrong cut-off. The management must decide on the lesser evil prior to selecting it.

The Link between the P-value and the Confidence Interval

The P-value and the confidence interval are closely linked. In fact, if you have the value of one you can derive the value of the other.

What the Conclusion Does Not Mean

Statistical conclusions should not be taken literally. One needs to understand how to interpret them prior to any decision-making. Read the full article
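To make the call-center comparison above concrete, here is a minimal sketch of a two-tailed, two-sample test in Python, assuming the SciPy library is available. The AHT figures, the sample sizes, and the 0.05 threshold are invented for illustration; a real project would use the data gathered in the measure phase.

# A minimal sketch of a two-tailed, two-sample hypothesis test using the
# call-center AHT comparison described above. All numbers are hypothetical.
from scipy import stats

# Hypothetical Average Handling Times (minutes) from two call centers
aht_center_a = [6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2, 6.1, 5.7, 6.0]
aht_center_b = [6.5, 6.7, 6.2, 6.8, 6.4, 6.6, 6.9, 6.3, 6.5, 6.7]

# Null hypothesis: the two centers have the same mean AHT.
# Alternate hypothesis (two-tailed): the means differ in either direction.
t_stat, p_value = stats.ttest_ind(aht_center_a, aht_center_b)

alpha = 0.05  # significance threshold chosen before running the test
print(f"t-statistic = {t_stat:.3f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the AHTs differ significantly.")
else:
    print("Fail to reject the null hypothesis: no significant difference found.")

The same pattern applies to one-tailed tests; SciPy's ttest_ind also accepts an alternative argument for an upper- or lower-tailed comparison.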
Measurement, Accuracy, Precision, Linearity & Resolution in Six Sigma
Understanding Measurement Error

Let us see what makes measurement systems fallible. The term for this is measurement error, and understanding it is a very important tool in the toolkit of any Six Sigma practitioner.

Decisions are Based on Variation Data: Six Sigma does not like variation. Variation is considered the enemy of the process and must be removed. All decisions about process modification are taken on the basis of observed variation. Observed variation, however, doesn't represent the real picture!

Break-up of the Variation Data: This is the formula that describes measurement error:

Observed Variation = Actual Process Variation + Measurement Error

Measurement System Analysis is used to understand the different variations that the process exhibits at different stages. Once the measurement error has been understood, it can be separated from the observed variation data. What remains is the actual process variation, on which effective decisions must be based.

Charting the Correct Data: Sometimes, the measurement error is small and insignificant. But the Six Sigma team and the process managers see data from multiple variables listed in their control rooms, and these small errors compound into big errors. A system should exist in which the actual process variation, not the observed variation, is charted in front of the managers. This will allow for decision-making that is sound and prudent.

May Arise Because of People or Instruments: Measurement error does not come from a single source. This makes understanding measurement variation a complex task. Measurement variation shows up in incorrect readings by people and by instruments too. An effective measurement system must be devoid of these errors.

Importance of Measurement Systems Analysis

Measurement Systems Analysis is an important part of the Six Sigma project. This is a compulsory process. Good results are not possible without it.

Six Sigma is Data Driven: Data drives the entire Six Sigma philosophy. It proposes basing results on measurable facts as against subjective opinions. Data is food for Six Sigma, which makes it the powerful tool that it is.

Measurements Form the Core of This Data: The facts used to reach conclusions and adopt changes are called measurements. What gets measured gets managed.

Measurements are Not Always Accurate: There is no guarantee that measurements are always precise. All measurements have some kind of error. In Six Sigma, it is important to quantify the error. This allows the management to take action accordingly.

Wrong Measurements Can be Wasteful: Faulty measurement systems derail process improvement initiatives. They mislead managers into making wrong decisions, abandoning successful initiatives or sticking with unsuccessful ones. Six Sigma projects focus on error elimination. They ensure that there are no faults and that their tools will lead them in the correct direction.

Causes of Measurement Variation

A measurement system validates the measurements before they are considered factual data and used for any decision-making. A Six Sigma practitioner knows that the measurement system is never as good as you think it is. People who do measurement system analysis routinely find measurements that are 30% or more off the mark. The causes that lead to such widespread variations are given below:

People: When people are involved, measurement system variation occurs. Taking measurements repetitively is a dull task and prone to errors. Sometimes, employees avoid the work and fudge the measurement numbers. Measurements related to process efficiency must therefore be collected automatically, without human intervention.

Equipment: Faulty equipment results in inaccurate measurements. We see worn-out and uncalibrated machines. Maintenance and calibration of the machines is absolutely essential. If measurements are critical to the process, the latest measurement systems need to be put in place. One must measure in the best possible way.

Computational Complexity: This is linked to the human cause of measurement system error, but it is not due to negligence or manipulation. Sometimes, the way a metric is designed and calculated is not clearly communicated. This is why we get faulty software tools and operators with an incomplete understanding of the calculation.

Lack of Standard Procedures: The important metrics of the measurement system should be defined. They should be communicated to every person who needs to know them. The definition must be reached by consensus, and the way the numerical value of the metric is arrived at must be stated in a manual. Without this, people will have different notions about the same metric.

Accuracy vs. Precision

The words accuracy and precision are used almost interchangeably in conversational usage. In measurement system analysis, their meaning, interpretation, and usage are widely different. Both characteristics form part of a good measurement system.

Accuracy: A measurement system is accurate if the average of its observed values is near the actual value. The mean of all values must be calculated and then compared to a standard value. The closer it is to the standard value, the more accurate the system is.

Precision: A measurement system is precise if all values are in close proximity to each other. The observed values must lie within a small distance of each other. Precision is a function of the standard deviation of the observed data. The smaller the standard deviation, the more precise the measurement system is. The features of a precise system are repeatability & reproducibility. They are:

- Repeatability: This is the ability of a system to give measurements that are close to each other when the same person measures the same item with the same equipment. The factor that varies is time. A repeatable system confirms that measurements taken over time are consistent.
- Reproducibility: This is the ability of the system to produce consistent measurements when many people use different equipment to measure the same item. A strong system gives consistent measurements regardless of who is measuring and with what.

Four Possible States of a Measurement System

Since accuracy and precision mean different things vis-à-vis measurement systems, there are 4 possible states that a measurement system can be in. They are:

- Both accurate and precise
- Accurate but not precise
- Precise but not accurate
- Neither accurate nor precise

When accuracy and precision coexist in the system, it gives measurements that are close to the standard value and to each other too. This is what every measurement system strives for.

Linearity and Resolution

Some more factors that determine the soundness of a measurement system are:

Resolution: Resolution is the capability to see fine details in a system. It gives the system the ability to separate different readings from each other.

Linearity: A system is linear if proportional changes in input measurements produce proportional changes in output measurements. To test for it, the input variables are varied in a controlled manner and the corresponding outputs are recorded. It is good to take at least 10 measurements. These are charted on a graph and a line is fitted through the points. The extent to which all the points lie on this line is the extent of the linearity of the system. Common reasons for measurements not being linear are as follows:

- Worn-out equipment
- Calibration at the upper & lower ends of the range
- Internal design problems in electronic measurement

Steps in a Measurement System Analysis

Measurement Systems Analysis is a complicated exercise. Six Sigma has a step-by-step procedure to conduct it. The practitioners must understand the focus of the exercise and the interpretation of the results. The complex calculations are done by software. The 4-step process is as under:

Step 1: Plan the Study
- Confirm key measures
- Develop operational definitions

Step 2: Conduct the Study
There are a few decisions that need to be made during the study. They are:
- Determine the number of measurement trials
- Determine the organization of trials
- Use different operators
- Use different equipment
- Use different conditions

Step 3: Analyze the Results
Once the study has been conducted, the next step is to analyze the results. Whether the measurement system is good enough depends upon the usage of the measurements. In case the measurements are for precision engineering, they must be very accurate.

Step 4: Fix the Measurement System
It is difficult to state a generalized way to fix measurement system errors. However, since most of the measurement variation is caused by a handful of factors, we can try fixing those factors to fix the measurement system.
- Change equipment
- Train operators
- Further analysis

A repetition of the previous step must be conducted to finish the exercise. Read the full article
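To make the accuracy and precision definitions above concrete, here is a minimal sketch in Python. The reference value and the repeated readings are invented for illustration; a full MSA study would of course also examine repeatability and reproducibility across operators and equipment.

# A minimal sketch of checking accuracy (bias) and precision (spread) from
# repeated readings of a reference part. All numbers are hypothetical.
import statistics

reference_value = 10.00  # known standard value of the reference part (mm)
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.00, 10.04, 9.99, 10.02]

mean_reading = statistics.mean(readings)
bias = mean_reading - reference_value   # accuracy: closeness of the average to the standard
spread = statistics.stdev(readings)     # precision: closeness of the readings to each other

print(f"Mean reading        : {mean_reading:.3f} mm")
print(f"Bias (accuracy)     : {bias:+.3f} mm")
print(f"Std dev (precision) : {spread:.3f} mm")

A small bias with a small standard deviation corresponds to the "both accurate and precise" state described above; the other three states show up as a large bias, a large spread, or both.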
Process Mapping & Data Handling in Six Sigma
What is a Detailed Process Map?

The first step in the measure phase is to create a detailed process map. People get confused between a high-level process map and a detailed process map. This blog will clear the confusion. Processes have a massive amount of detail, so a drill-down approach is used to represent them. The first process map contains 6-7 elements. These elements may even be sub-processes, meaning that they have their own processes too. In the high-level map, each of them is still represented as a single element. When the process map contains the details from these sub-processes, it is known as a detailed process map.

How is it Linked to a High-Level Process Map?

The detailed process map is part of the high-level process map. It is easier to understand with the analogy of a website. The high-level process map is the home page. It contains the links to other pages, which are the other processes. When executed, the process reaches a given stage, the internal processes are executed first, and the result then passes to the next element in the process. A detailed process map has the exact steps, the exact inputs, outputs, metrics, and people that are required to execute the process.

The Use of Swim Lanes to Simplify the Process

Swim lanes are a technique used in process mapping. It simplifies the work procedure. The process is divided into swim lanes. They represent the different people that will work on the job. A certain person is responsible if the task falls in his swim lane. Detailed process maps are usually in the swim lane format. There are usually multiple detailed process maps, and keeping track of who is who and what is what may be confusing. Swim lanes simplify them.

How do Detailed Process Maps Help in Data Collection?

A detailed process map shows the exact type and quality of the input needed and the output anticipated. It also contains information on how a certain process should behave. A detailed process map helps us know the inputs, outputs, and variables. Once the list is complete, it is time to prioritize and identify the important ones.

How to Create a Detailed Process Map?

The sequence for creating a detailed process map is as under:

Step 1: Choose Between a Normal Process Map or Swim Lanes
Step 2: Identify Decisions and Systems
Step 3: List Outputs and Inputs
Step 4: Organize the Input Variables
Step 5: Gather and Record Data
Step 6: Evaluate and Assess - Check whether the ideal state of the inputs actually holds. Run a simulation using the prescribed variables and check for any bad consequences.

Identify the Vital Few Inputs

The list of inputs from the detailed process map must be shortlisted. This is done in many ways. The tools that are available are as follows:

Cause and Effect Matrix: This lists the inputs from the detailed process map. They are then rated on the basis of CTQ parameters. The rows in the cause-and-effect matrix list the inputs and the columns list the critical to quality (CTQ) outputs. The intersection cell shows the impact of the input on that very output. The impact is rated on a scale of 1 to 5, with 5 being the highest. There is also a total column that shows the aggregate impact of the input on the outputs; it is calculated by multiplying the ratings by the output weights and adding them up. The inputs are arranged in descending order and prioritized.

Priority Matrix: The priority matrix narrows a large number of causes down to a limited few. The way to do so is a 2x2 matrix. On one axis resides control. It has two classes - high control and low control. High control means that the variables are within the scope of the project identified and are controllable. Low control means that the variables are not within the control of the project team. On the other axis is the dimension of impact. The end result shows the 4 categories of variables with their usual control plans. They are:

- High Control and High Impact: The team's focus.
- Low Control and High Impact: Visit the champion, as these variables may be out of the team's range of control.
- Low Control and Low Impact: Just ignore.
- High Control and Low Impact: Consider the total impact of these variables.

Cause Effect Diagram: Also called the fishbone diagram or the Ishikawa diagram, it is the most popular way of zeroing in on inputs.

Failure Mode and Effect Analysis: FMEA judges whether a variable may be able to impact the project negatively. An FMEA score shows the risk associated with that variable.

Six Sigma people use multiple analyses. This helps them look at things from different perspectives. They are assured that the correct variables have been pinpointed.

Characteristics of Data - Central Tendency and Dispersion

Converting Data to Information: A Six Sigma project produces data on a large scale, which can end up intimidating people. Data needs to be turned into meaningful and manageable information, and for that the data needs to be handled statistically. Data needs to be understood for its central tendency and dispersion. Data tends to be centered around an average. How far it spreads out has a bearing on the probability. The following characteristics are involved:

Measures of Central Tendency: Different types of data require different measures of central tendency. Some of the important measures are:

- Mean: This is the arithmetic mean or the average of the data points involved.
- Median: If all data points were arranged in ascending or descending order, the value in the center is the median.
- Mode: This is the value that occurs most frequently in the data set.

Measures of Dispersion: The degree of spread shows the probability and the level of confidence in the results of central tendency. Common measures of dispersion are:

- Range: The spread between the smallest and largest values of a data set is the range. It includes all the possibilities.
- Quartiles: The data set is divided into 4 equal parts. Each division is a quartile.
- Standard Deviation: A formula works out the standard deviation of a given set of data. It is a measure of dispersion.

Different Types of Data and their Importance

The types of data that are used for statistical analysis are:

Continuous Data: Continuous data is measured rather than counted. It could be the length of a certain object, for example.

Discrete Data: Discrete data is counted instead of being measured. The values fall into one of the following categories:

- Binary: Like true or false, success and failure, black or white.
- Ordered Categories: The data falls into multiple ranked categories.
- Unordered Categories: The data is entered into one of multiple categories that are unranked.
- Count: This is simple counting of data without categorization.

Why is the Type of Data Important?

The type of data impacts the analysis. With continuous data, the probability of any exact value is zero, so a range needs to be used. The probability of the length of an object being precisely 2 feet is zero in a continuous distribution. With discrete data, the probability of an exact outcome can be found.

Shapes of Data and Characteristics of Shape

The shape of the data tells us the type of tools we require.
Here's how to graphically plot the data to know its shape:

Step 1: Sort Data into Categories: Data must be divided into equal categories. The categories must have equal intervals. A frequency table must be made from the available data set, and the frequency of occurrence in each interval category must be noted.

Step 2: Draw a Histogram: Plot the data intervals on a graph and make a histogram. A histogram is a bar chart of continuous data with equal intervals.

Step 3: Join the Midpoints to Find the Shape: Plot the midpoints of the bars of the histogram. These midpoints must be joined to develop the data curve. This is the shape of the data. Symmetry in the shape is very important. The reasons for this are listed below.

Characteristics of Shape

The shape of data is important. One can make decisions about the probability of data based on its shape. The details are as follows:

Symmetrical Data: Symmetrical data is easy to work with. There are many statistical techniques for it. Its shape is also called the bell curve. In Six Sigma, the results of a process are likely to be normally distributed. Most things in nature have a normal distribution. The applications of symmetrical data are many.

Skewed Data: Sometimes, the data is skewed towards one side. It can be positively or negatively skewed. Statistical techniques help us know the probability distributions of skewed data too, although such analysis is performed less often.

Data Collection Plan

A data collection plan describes the exact steps and the sequence for gathering data for a Six Sigma project. The people who design the data gathering plan do not actually collect the data, so the plan has to make sure that the Six Sigma project team understands it and that this information is correctly transmitted to the relevant people. The typical components of a data collection plan are:

Purpose: The first thing is the purpose. This could be determining the stability or the capability of a process. The purpose must be defined clearly.

What: These are the measurements of the data to be recorded. When metrics are being specified, we should also define the way calculations will be handled. This avoids confusion. If this is not done, the numbers will not be comparable.

Type of Data: We must know whether the data is continuous or discrete. The people executing the data plan need this information.

Who: Presently, most data is collected by machines. This could be a shop floor machine or workflow software. People then collate the data and display it in a format that is legible to the Six Sigma team.

Where: This is a location within the process. The data collection plan must specify where in the process the data is collected from.

Frequency: Data has to be collected over a period of time so that it shows frequency patterns.

How to Display: The data display format must be agreed upon. A graphical method is easier to use.

A statistical expert prepares the Six Sigma data collection plan. This document ensures the use of expertise at all steps.

Data Sampling Techniques

The method used to collect the sample has large implications for the conclusions drawn from that sample. Below are techniques that are used for sampling populations and processes.

Population Sampling Techniques

- Random Sampling: A random sample means each member of the population has an equal chance of being selected.
- Stratified Random Sampling: Sometimes, the population is broken down into strata with their own data elements. Within each stratum, every element has an equal chance of selection, just as in random sampling, but the number of elements drawn from each stratum is pre-determined. A short sketch of both techniques follows this list.
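As a rough illustration of the two population sampling techniques just described, here is a minimal sketch in Python. The population of order records, the region names, and the sample sizes are all invented for illustration.

# A minimal sketch of simple random and stratified random sampling.
# The "population" of order records is hypothetical.
import random

random.seed(42)
population = [{"order_id": i, "region": random.choice(["North", "South", "West"])}
              for i in range(1, 301)]

# Simple random sampling: every record has an equal chance of selection.
simple_sample = random.sample(population, k=30)

# Stratified random sampling: split into strata, then draw a pre-determined
# number of records at random from each stratum.
per_stratum = 10
strata = {}
for record in population:
    strata.setdefault(record["region"], []).append(record)

stratified_sample = []
for region, records in strata.items():
    stratified_sample.extend(random.sample(records, k=per_stratum))

print(f"Simple random sample size   : {len(simple_sample)}")
print(f"Stratified sample size      : {len(stratified_sample)}")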
Process Sampling Techniques - Systematic Sampling:With systematic sampling, the first element is chosen at random. The next elements are chosen in a systematic fashion. This is like including every 10th element in the sample. These samples are systematic & don’t require a static population base. They can also be used for process sampling. Systematic sampling is a popular method of process sampling. - Rational Subgrouping:This technique’s main aim is to produce data for control charts. Samples are derived from subgroups at regular intervals. The data collector also decides the sample size and the interval. This should be enough to detect any changes in the process. An additional dimension is time. Studies are done and cycles and patterns are found. Planning this study requires more expertise than a random sampling plan. Many sampling techniques are available. The main four are listed above. They account for almost 80% of the sample types used in studies. A thorough understanding of these will help improve your sampling effort.   Read the full article
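The two process sampling techniques above can be sketched in the same spirit. The stream of cycle times, the sampling interval, and the subgroup size below are invented for illustration; in practice they would be set by the data collection plan.

# A minimal sketch of systematic sampling (every k-th unit after a random
# start) and rational subgrouping (small subgroups taken at regular intervals
# for a control chart). All numbers are hypothetical.
import random
import statistics

random.seed(7)
cycle_times = [round(random.gauss(12.0, 0.8), 2) for _ in range(200)]  # one value per unit produced

# Systematic sampling: random start, then every 10th unit.
k = 10
start = random.randrange(k)
systematic_sample = cycle_times[start::k]

# Rational subgrouping: consecutive subgroups of 5 units taken at regular
# intervals, summarised by their mean for an X-bar style control chart.
subgroup_size = 5
interval = 40  # e.g. one subgroup every 40 units produced
subgroup_means = [statistics.mean(cycle_times[i:i + subgroup_size])
                  for i in range(0, len(cycle_times), interval)]

print(f"Systematic sample size: {len(systematic_sample)}")
print(f"Subgroup means: {[round(m, 2) for m in subgroup_means]}")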
The Financial Risk, Project Risk, Assessment Matrix, & Project Schedule in Six Sigma
Measuring the Financial Benefits of a Six Sigma Project

The final step is to measure the financial rewards that accrue because of the Six Sigma project. This is important and is also an area of concern. Six Sigma critics say that the benefits presented are incorrect and do not accrue to the firm. They say that the true picture is often different. To get resources sanctioned from the finance department, the Net Present Value (NPV) of the project must be presented to them. The most common gains to an organization because of Six Sigma projects are:

Increased Revenue: The main benefit of the Six Sigma project is to increase revenue. This could be driven by many influences. The more efficient the processes, the more goods a firm produces. It makes them cheaper than competitors and sells more, thereby increasing revenues. With Six Sigma projects, one could also increase product quality. This adds loyalty and revenues.

Avoided Costs: The firm can avoid regulatory penalties and expansion costs if the processes are efficient. Governments and regulators penalize inefficient behavior in companies. If a business does not pay taxes on time, it has to face penalties. Six Sigma projects make these processes efficient and avoid such costs.

Reduced Costs: The operational costs of the firm can be reduced with Six Sigma projects. Because of Six Sigma, Motorola made pagers with better features than its rivals, at a much lower price.

Non-Monetary Benefits: Many non-monetary benefits also come to the firm. They are really indirect monetary benefits; they cannot be precisely measured, and so they are called non-monetary. Common examples are:

- Greater customer satisfaction
- Improved employee satisfaction
- Improved reputation of the firm

Critics of Six Sigma say that analysts fudge the numbers to increase the NPV of the projects. Still, efficiency is always good for the organization. Undertaking Six Sigma projects and seeing them through makes the organization better off.

What is Project Risk?

Having the best people onboard doesn't guarantee success. There are external factors that play a role in the eventual outcome of a project. These are the project risks. A risk is an event that may negatively impact the project. Risks can be mitigated and prevented, but this warrants understanding the risks and planning in advance. The DMAIC methodology in Six Sigma has inbuilt risk assessment. The major categories of risk are:

- Stakeholder Risk: Stakeholders have a vested interest in the project. Common stakeholders are regulators, suppliers, managers, and customers. Stakeholder risk arises because they may not have the drive or the capabilities required to execute the project.
- Regulatory Risk: An organization faces regulations. It faces rules from the local and state governments, rules of international trade bodies, and also internal regulations. These are for good governance and avoiding fraud. The Six Sigma team must ensure compliance with these regulations.
- Technology Risk: Many times, the solution requires a new technology. A business may not be in a position to acquire this immediately. This could be due to financial or operational constraints. This poses risks to the project and can harmfully affect the implementation of the solution.
- External Risk: The execution needs help and support from several outside vendors. The dependence on these vendors is an obvious risk to the project. These vendors are beyond the control of the organization. The organization cannot predict issues from external sources.
- Execution Risk: The project faces the risk of not getting continuous support from the organization. The organization may discover a better use of its resources in the meantime. The project could be poorly scoped, causing it to spill over. This leads to the wastage of resources, forcing the management to abandon it.

An experienced Six Sigma team will give the risk assessment task to its most capable members. Good risk assessment plans ensure that the organization successfully implements the project.

How to Prepare the Project Risk Assessment Matrix?

The Project Risk Assessment Matrix is a compulsory document for completing the Define phase. Here is a step-by-step guide to preparing the Project Risk Assessment Matrix:

Step 1: List down the Risks in an Exhaustive Document

Step 2: Rate Each Risk for Probability and Impact of Occurring

Step 3: Classify the Risks: All risks are not equally important. They need to be classified, and priority risks must be focused upon. There is a standard matrix that classifies the risks into the following 4 categories (a classification sketch follows at the end of this post):

- High Probability & High Impact
- High Probability & Low Impact
- Low Probability & High Impact
- Low Probability & Low Impact

Step 4: Decide on Mitigation Planning: The three basic strategies that mitigate risks successfully are:

- Prevention: These measures ensure that the risk event cannot take place. This is for the high-impact risks.
- Correction: These measures catch the risks before damage has been done.
- Warning: The idea is to detect the risk as early as possible.

What is a Project Schedule?

A project schedule contains vital information about the beginning and end of the five phases of the DMAIC methodology. It contains information about the project team, the risks that are known, and the approval status. At the end of each stage, there is a meeting to compare the work done in that stage with what was planned. The project schedule contains the dates of these meetings and their agenda. Project schedules are usually displayed on the shop floor to remind everyone of the current status versus what should have been achieved by now.

Factors to Consider Before Deciding on a Project Schedule

Choosing an arbitrary project date is not advised. It can lead to the project schedule not being followed, which makes the whole exercise useless. A project schedule must be created by a Project Champion, a Project Lead, or someone senior. This is what they are required to consider:

Historical Six Sigma Information: Many such projects have been undertaken in the past and their documentation is present. This information should be referred to prior to deciding the project schedule.

Constraints: A project team never has all the resources it requires. Sometimes the resources reside with the parent organization and just need a transfer to the project team, which does not involve much time. Sometimes, these resources need to be procured, and there may be bureaucratic hitches involved. Time should be allowed for convincing the management.

Assumptions: It is easy to make unrealistic assumptions regarding the project. Sometimes, external experts are needed. These experts have other commitments as well and come in only when they are required. Expecting overnight miracles from them is incorrect. Sometimes training has to be imparted to newer members.

Risks: The risk assessment document tells us about the kinds of setbacks that a project is likely to face. This must be carefully studied prior to devising a schedule. The idea is to stretch the project team, but not beyond its capabilities. This keeps them on their toes.
Unachievable targets are a huge demoralizer. Read the full article
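To make Step 3 of the risk assessment matrix described above concrete, here is a minimal sketch in Python. The risks, the 1-10 rating scale, and the cut-off are invented for illustration; in practice the ratings and the threshold would be agreed upon by the project team.

# A minimal sketch of classifying risks into the four probability/impact
# categories from the risk assessment matrix. All ratings are hypothetical.
risks = [
    {"name": "Key supplier delay",          "probability": 8, "impact": 9},
    {"name": "New regulation mid-project",  "probability": 3, "impact": 8},
    {"name": "Operator turnover",           "probability": 7, "impact": 3},
    {"name": "Minor software glitch",       "probability": 2, "impact": 2},
]

CUTOFF = 5  # ratings above this count as "high"

def classify(risk):
    prob = "High Probability" if risk["probability"] > CUTOFF else "Low Probability"
    imp = "High Impact" if risk["impact"] > CUTOFF else "Low Impact"
    return f"{prob} & {imp}"

for risk in risks:
    print(f"{risk['name']:<28} -> {classify(risk)}")

The high-probability, high-impact items are the ones that warrant prevention, while the low/low items may only need a warning mechanism, as described in the mitigation strategies above.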
All about Metrics & their Importance in Six Sigma
What are Metrics and Why are they Important?

Metrics are numbers. They give you important information about a process. They give you accurate measurements of the functioning of the process and provide a base for improvements. Understanding in terms of numbers makes the understanding meaningful. Usually, a combination of metrics is used to gauge the effectiveness of the process.

Types of Metrics

- Operational: Operational metrics are represented by performance on the shop floor or service levels. These measure the performance of the operations function and can identify any discrepancy and its roots.
- Financial: Financial metrics judge the ability to convert operational performance into financial goals.

Both types of metrics should be understood so that meaningful decisions can be made about the process. Here are some important functions that metrics serve.

- Control and Feedback Loop is Driven by Metrics: The ideal state of the process has to be expressed in metrics. Metrics are numbers measured daily. What can be measured can be managed. Metrics tell you whether the process is fine or needs external intervention. They form the basis of control.
- Metrics Make the Process Objective: Processes must be designed as per quality requirements. Metrics transform vague requirements into numbers. These can accurately map the process for efficiency. They tell us whether a process is good or needs improvement.
- Improvement Goals are in Terms of Metrics: Goals need to be objective. They should be measured in numbers. Metrics play an important role here. They transform the customer requirements and operational performance into comparable numbers. This way, the management can see whether the customer's needs are being met or not.

The Need for Operational Definitions of Metrics

While coming up with operational metrics, one must not forget to begin by first clarifying and defining what they mean. This is because there is a good chance of ambiguity regarding what the metric really means. To come up with sound operational definitions, the following procedure must be followed.

Same Vocabulary: The same vocabulary must be followed. This ensures consistency in what is being said. Inconsistency in vocabulary misleads the management, and they end up making wrong decisions. The relevant vocabulary must be defined.

How Data Is Collected: There is also a possibility of discrepancies in how data is gathered. The results will not be comparable if the methods used are not the same. Data collection methods must have no ambiguity.

How Calculations are Done: Different departments do calculations differently. This will create a discrepancy in the metrics being used. One department may round off the decimals while others may not. This may produce a significant difference in the values. Defined ways of calculating must be prescribed. Ideally, data collection and calculation should be automated.

Develop the Range: Metric values never fall at exactly the same point every time. The range within which they may fall must be defined. This is done by making control charts. If the metrics are in the given range, then the process is fine; if not, then a senior must be alerted.

Classify what Happens if Variables go Beyond Range: If the variable exceeds the upper limit, it means one thing, and if it crosses the lower limit, it means something else. The magnitude by which it crosses the limits can lead to the variables being classified distinctly; a short sketch of such a range check follows below. To be effective, the management must have complete control over how metrics are created.
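As a rough sketch of the "develop the range" idea above, here is a minimal example in Python. The metric, the control limits, and the daily readings are invented for illustration; in practice the limits would come from a control chart.

# A minimal sketch of checking a daily metric against pre-agreed limits and
# flagging values that drift outside the range. All numbers are hypothetical.
UPPER_LIMIT = 95.0   # agreed upper bound for daily first-pass yield (%)
LOWER_LIMIT = 88.0   # agreed lower bound

daily_yield = [91.2, 92.5, 90.8, 87.4, 93.1, 96.2, 91.7]

for day, value in enumerate(daily_yield, start=1):
    if value > UPPER_LIMIT:
        status = "above range - investigate what changed"
    elif value < LOWER_LIMIT:
        status = "below range - alert a senior"
    else:
        status = "within range - process is fine"
    print(f"Day {day}: {value:.1f}% -> {status}")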
Primary Metric(s) - Meaning and its Characteristics

One must know the inputs and outputs of the process. These inputs and outputs are displayed via metrics. The primary metric tells how your process is performing. Let us understand the concept of primary metrics in detail.

What is a Primary Metric?

Any process can be defined as Y = f(X), where X represents the critical inputs and Y is the output. The primary metric is the Y of that process. The output needs to be consistently measured, errors must be spotted, and corrections must be made. It is important for a Six Sigma project to have the correct primary metrics. This makes sure that the project is under control. The wrong primary metrics will produce wrong measurements.

Traceable to Critical to Quality Measures

The primary metric has been present in the project since the first step. The primary metrics reside in the critical to quality (CTQ) measures defined by customers. They are recorded in the problem statement, the goal statement, and the whole business case. They become the primary metric if the project is correctly executed.

What are the Characteristics of an Appropriate Primary Metric?

- Accurately describes the desired condition: Metrics describe a condition in the process.
- Time lag should be minimal: Metrics must reach the management quickly so that corrective action can be taken if the project is going off-track.
- Not open to manipulation: If the metric allows manipulation, it has not been chosen well. A good metric is collected without human intervention.

Secondary Metric(s) - Meaning, Purpose, and Identification

A primary metric-driven process is not foolproof. Operations managers always face trade-offs. With everything good, there is a chance that something bad will come in too. To prevent this, secondary metrics are needed.

What is a Secondary Metric?

The primary metric measures what needs to be fixed, and the secondary metric measures what must not be broken.

What Purposes does the Secondary Metric Solve?

Holistic Picture: The secondary metrics give a holistic view of the operations. The primary metric conveys information about one of the outputs. Information about the other outputs is obtained via secondary metrics.

Problem Shifting: Secondary metrics ensure that workers do not shift problems in the name of Six Sigma projects. Trade-offs are a part of the process: something is sacrificed for a certain gain, and that sacrifice should be monitored explicitly. Secondary metrics help us do exactly that.

How to Identify Secondary Metrics in the Process?

If many secondary metrics are chosen, it may become difficult to keep track of them. This is how the best secondary metrics are chosen.

Other Critical to Quality (CTQ) Measures: The most important CTQ measure becomes the primary metric. The other CTQ measures become secondary metrics. Because they are critical, their values need to be controlled. The top few metrics that determine the quality of output become secondary metrics.

Assume Future Problems: A way to discover secondary metrics is to figure out what can go wrong with the Six Sigma project. Anything with a plausible chance of negating the positives of the Six Sigma project is a secondary metric and needs attention. Read the full article
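A minimal sketch of the primary/secondary split described above, in Python. The metric names, baselines, and post-change readings are invented for illustration; the point is simply that a change is accepted only if the primary metric improves and no secondary metric degrades.

# A minimal sketch of watching a primary metric ("what needs to be fixed")
# alongside secondary metrics ("what must not be broken"). All numbers are
# hypothetical.
baseline     = {"aht_minutes": 6.5, "csat_score": 4.2, "first_call_resolution": 0.78}
after_change = {"aht_minutes": 5.9, "csat_score": 4.0, "first_call_resolution": 0.79}

primary = "aht_minutes"                                  # lower is better for this primary metric
secondaries = ["csat_score", "first_call_resolution"]    # higher is better; must not degrade

primary_improved = after_change[primary] < baseline[primary]
degraded = [m for m in secondaries if after_change[m] < baseline[m]]

print(f"Primary metric improved     : {primary_improved}")
print(f"Secondary metrics degraded  : {degraded or 'none'}")

In this made-up example, handling time improved but customer satisfaction slipped, which is exactly the kind of problem shifting that secondary metrics are meant to expose.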
Steps in the Define Phase of the Six Sigma Projects
Step 1: Collect and Review Primary Information The project starts by assigning responsibility to the Project Champion. The Project Champion delegates the preparation of the Project Charter to a team member. The Project Champion and the team member must understand the information required. Here are tips on how to collect and review primary information from the management. Where Do Six Sigma Projects Come From? Six Sigma projects do not suddenly appear. They are the result of conscious and rigorous planning. The top management maps the capabilities of a business to the deliverables they anticipate and find gaps. Six Sigma projects are recommended by workers who are in the best position to advise enhancements. A Six Sigma project is driven by business problems. Therefore, the information vis-à-vis the problem to be solved is required. What Information should be received before embarking on the Project? Before commencing work on the project, there is very little required information. This information relates to the objectives of the project. These objectives are extensions of the business problem identified earlier. The time frame needs to be made very clear. What needs to be done with this Information? Many times, the management does not give the Project Champion exact Six Sigma project statements. The information is vague and too broad to be meaningful. Such a statement is just a symptom and not the real reason. The information given must be made comprehensible, so a lot of digging is required. The Project Champion must: - Retain all the information he/she is given - Clarify, in case the objectives are too broad - Conduct an analysis that reveals the real issues Why is this stage critical? The project team commences work in the light of the data it has received. There must be no ambiguity about storing this information. Down the road, a lack of data can be damaging. It can cause the team to work without critical data or engage in expensive and time-consuming data gathering. A misunderstanding may send the team on a completely wrong track. The activities in this stage should be assessed very carefully. Step 2: Defining a Scope for Your Project Scoping begins when initial information about the Six Sigma Project is received. Let us explore the world of project scoping. Let us discover how to use it. The project scope defines the contents of the process. Processes are usually nonstop. When one process ends, the other one starts. If it is neglected, the work goes haywire and unfocused. An accounting cycle begins when raw materials are procured and ends when cash for goods sold is received. It begins and ends with cash. There are many steps in between. These are converting raw materials to work in progress, converting work in progress to finished goods, transferring finished goods to inventory, and selling the inventory on credit. In a Six Sigma project, one must know that they are trying to improve processes. What information should be included in the project scope document? The most important information in a scope document is the boundaries of a process. It must define clearly the start point and end point of the process. Project scoping is done with high-level processes. Many activities in the process have their own sub-processes.  This is why the scoping document must explicitly state what processes constitute the project. Why is it important? The Project scope is vital to the project execution. 
This document is the basis for the requirements of the project and mentions the resources that are needed in the project. Some important information in the project scope are: - What processes or activities are not involved? - What are the material resources needed? - What human resource is needed? - What expertise level is required? - What technological resources are required? Reviewing Project Scope The completed project scope document must be sent for an immediate review. The following points must be considered in the review process: - Overlapping with any other present or future project - The Project scope should not have multiple functions - The Project scope should not have multiple products - The Project scope should not have multiple regions If the project scope does not meet these requirements, it is red-flagged. A well-defined project scope is critical. If the project scope is too wide-ranging, a Pareto analysis identifies the key issues and narrows the project scope to make it more manageable. Step 3: Defining a Project Problem Statement The next post in the Six Sigma journey is to have a problem statement that will guide the team during the project. Here are a few tips that give us an understanding of how a project problem statement must be developed. What is a Problem? A problem is a difference between the expected state and the actual state of affairs. In businesses, problems can occur in many forms and can have multiple causes. Sometimes problems are hidden. What we conceive as problems are only symptoms in reality. The Problem with “Problems” Business problems have various dimensions and people interpret these dimensions separately. The common problems that happen  because the problem was not accurately diagnosed are as follows: - Hasty Decisions:Businesses suffer because of making hasty decisions. They make resource commitments that are not required and prove to be a leaking drain in the long run. - Assumed Common Understanding:The problem must be stated down and discussed with the team. The Project Champion must quiz the team members to make sure that they have the correct knowledge vis-à-vis the problem. - Assumed Causes or Solutions:Humans jump to conclusions. It is our tendency. Sometimes, we jump to the wrong conclusions too. Never be judgmental when defining a problem. This narrows our thinking and we are not able to think clearly. Convert Your Regular Problem to a Six Sigma Problem A routine and regular problem must be converted to a Six Sigma Problem. The Six Sigma Problem is a difference between the desired and actual state. It should be defined without any ambiguity. The questions that are usually answered are: - Who is affected by the problem? - What causes the problem? - When does the problem happen? - Where does the problem happen? - What is the business impact of the problem? (The amount of revenue lost, time lost, employee & customer inconvenience) This Six Sigma problem gives a concrete goal statement to the project execution team. The team works on it. Let us see it through an example. Example Normal Problem: Employees come late to work causing a loss in productivity. Six Sigma Problem: At a certain factory, 45% of the employees report to work within 15 minutes after the time that they were supposed to. This causes a fall in daily productivity by 5%. Step 4: Develop a Business Case for your Project Now is the step to develop a well-articulated business case. The management has to choose between many Six Sigma projects while granting resources. 
The cases that get the resources from the management are clear cases of compelling value propositions. What is a Business Case? A business case uses the problem and the goal statements and converts them into a business statement of value. The management must acknowledge that a problem exists. That way you will have a goal after the management reads your problem and goal statements. The problem must be urgent on the priority list of the management. What makes a Business Case Compelling? - Strategic Linkage: The management has to choose between many competing strategies. All may be beneficial. The management must choose which will be best and follow it. A better business case has the ability to make the organization believe that the Six Sigma project is in line with its strategic objectives. - Benefits: Problems exist at all levels. The lower-level management is expected to solve them. The Six Sigma project is managed by the top management. They are concerned with the benefits that your project will give to the organization. The project must have benefits like cost savings, increased service levels, and increased efficiency etc. - Link to Problem and Goal Statement: A business case must pinpoint losses that occur because of the problems identified in the problem statement. It must talk about the benefits that will be gained by achieving the goals mentioned in the goal statement. The difference between the problem statement and the goal statement is the business value of the Six Sigma project that is to be undertaken. - Brief: A wordy business case can destroy a well-identified problem and well-selected goals. The process owner must come up with a case that is easy to comprehend. This will make it most convincing to the management. Example: The loss in productivity because of employees coming late is $5 per minute per employee. For 1000 employees (40% of the workforce), the management loses $5000 per minute for 15 minutes. This is $75,000 every day. If the organization thinks saving $75,000 is their priority, they will take up the business case. Step 5 - Project Charter - Meaning, Importance, and its Elements What is a Project Charter? A Project charter collects all the information that has been developed thus far and puts it in a secure location. The Project charter is the constitution that governs the working of the project. It also governs disputes that may arise during the execution. Importance of the Project Charter The project charter is the final document of the Define Phase. This document is proof of the activities that have been undertaken thus far, including the agreement reached amongst the members of the team, the stakeholders, and the management. This charter is used to see whether the project is going as per plan or not. In the end, actual benefits are compared with the predicted benefits to declare the project either a success or a failure. The Project Charter controls the entire project. Elements of the Project Charter A Project charter has 5 - 6 elements. This depends upon the nature of the project and its requirements. The normal elements of Six Sigma project charters are: - Purpose: To be derived from problem and goal statements - Value: To be derived from the business case - Scope: To be derived from the high-level business flow - Team: To be decided as per the roles prior to the beginning of the project. 
- Schedule: To be prepared as per the deadlines provided at the start of the project - CTQ Measures: To be assembled from the information gathered in the problem and goal definition phase Accessibility and Modification The Project Charter must be accessible to all at any point in time. The document and its interpretation are critical to ensure that the project is headed in the right direction. Modifications to the charter must not be allowed under normal circumstances. If something needs to be included or corrected in the project charter, it must be done after due diligence & everyone must be informed about it. Develop a Goal Statement corresponding to the Problem Statement A Six Sigma team concentrates on the solutions and not the problems. When the problem has been identified, it must be turned into a goal statement. What is a Goal Statement? The Problem statement sees the problem at a minute level. The Six Sigma problem gives it structure. The details & measurements of the problem must be noted. The ideal state of affairs must be contemplated. The ideal state of affairs must be written. It must have the same details and measurements as the original goal statement. Here is how to do so: How to Create Goal Statements from Problem Statements? - Focus on the Numbers:The numbers in the problem give it objectivity. - Start with Action Verb:The word that is used at the beginning of the goal statement has deep implications. - Completion Date:A goal without a conclusion date is just a wish. SMART Goals The acronym for setting goals is SMART goals. Goals with SMART features succeed more than others. Here is what it means: - Specific:This refers to the scope of the goal. - Measurable:This refers to the numbers that make the goal measurable. - Actionable:The goals statement must be within the organization's control. - Relevant:The goals must be aligned with a greater relevant strategy. - Time-Bound:The goal statement must contain goals and deadlines and not be mere wishes. Tips for Writing Effective Problem and Goals Statement The goal and problem statements help in the execution of the project. They make us understand what components make some statements better than others. Here are a few tips: - Consider the Customer's Point of View - Consider Critical to Quality (CTQ) Measures - Use Measurements to Remove Ambiguity - Be Concise - Don’t Jump to Conclusions Inculcating these measures will make your problem and goal statements robust. They help you define the resolve of your project.     Read the full article
Six Sigma Tools - FMEA, AHP, TRIZ Matrix, Kano Analysis, & the Pugh Matrix
Failure Mode and Effects Analysis (FMEA)

All products and services fail! Even Six Sigma-compliant processes fail. We look at the possible sources of failures, the effects that they have, and how to prioritize these failure modes. An analysis of these three makes our planning even more robust. Anticipating possible failure modes and building safeguards against them into the design reduces the imperfections of the product/service under the spotlight. The Failure Mode and Effects Analysis (FMEA) is a tool for doing this.

The FMEA was initially implemented in the aerospace industry in the 1960s. Presently, it is an integral part of all projects where safety and reliability are the foremost concerns. The automobile industry extensively uses FMEA; Ford Motors was a pioneer in using it. Different industries have used the FMEA analysis. These analyses differ from each other, but the core remains the same: to design problems out before they happen.

List out Potential Failures: The Pareto principle is applicable to failures too. Most failures have a few underlying causes, and it is imperative that these causes are properly managed. The FMEA relies on the skill of the people performing it. The failure modes are reached by brainstorming. One needs to list down all the thinkable ways in which the process may go wrong.

Attempt to Design Failures Out of the System: Once failure modes are identified, preventing them and keeping them from recurring are the main goals. This can be done by using the following:

- Error Proofing: This needs to be done when the failure mode has high priority - there is a high probability of the failure happening and, if it does ensue, the complete system is disrupted. This is where prevention is better than cure. Typically, engineering and management teams are formed and asked to find solutions that mitigate this risk.
- Changing the Process: Sometimes we change the process to eliminate the risk. This may have operational losses, so a cost-benefit analysis is done to determine whether to implement this strategy. An instance of the use of this strategy is Henry Ford's Model T. Ford produced cars only in black color. He did away with the difficulties that could have arisen if multiple colors were used.
- Control Planning: The last step is making a plan to control failure. This strategy depends on the speedy detection of failure and setting the control plan in motion swiftly. This is implemented for smaller risks that are expected and don't threaten the business.

FMEA is Subjective

The FMEA analysis is highly subjective. It depends on the experts to solve problems. Two people doing the same analysis will have different results. This means that the analysis is only as good as the individual conducting it.

How to Implement FMEA

The FMEA depends upon the people who are using it. People can also be trained to implement this analysis. The insight required necessitates the guidance and experience of senior personnel. The step-by-step procedure of the FMEA is given underneath:

Review the Process: Understand the process deeply. Usually, we assume many things - that electricity will always be present, that the raw material supply will always be consistent, and so on. The FMEA explicitly states the inputs and the pre-conditions the process needs in order to work. By mentioning what is needed, the practitioners are prepared for the next step.

Brainstorm for Failure Modes: All the factors the process relies on were listed in the step above. In this step, brainstorming for failure modes is done.
The team needs to state ways that make the process go bust. One factor at a time is considered and then suppositions are made. All these suppositions are possible failure modes of the process. Rate for Occurrence: Once the failures have been listed, they need to be graded for likelihood of occurrence. This is done by assigning a score of 1,4 or 9. 1 means a very low chance of occurrence, 4 means a medium chance and 9 means a certain event. Rate for Severity: These modes are then graded for the severity of the outcome. This is done by assigning a score of 1, 4 or 9. 1 means low disruption, 4 means medium disruption, and 9 means a major disruption. Rate for Possibility of Detection: The failure modes are then graded for the likelihood of detection. One needs to consider the present detection mechanism that the business employs. The time frame to detect must also be considered. Familiar gradings of 1,4 and 9 must represent the severity. Multiply to Get Risk Priority Number (RPN): Time to multiply the three numbers to know the Risk Priority Number (RPN). The first three characteristics of any failure modes have been separately graded. This time they need to be combined to know the true threat that any failure mode faces. Decide Cut-off and Prioritize: Lastly, we arrange the failure modes in descending order. This is based on the scores generated. These scores are then ranked to decide the cut-offs. These are the ones that must be error-proofed, and controlled, and where the process requires a change. What is the Analytical Hierarchy Process (AHP) and the way to Use it? Analytical Hierarchy Process (AHP) is a problem-solving tool. The AHP method was devised after knowing the structure of a problem and the real obstacles that managers face while solving it. The Need for AHP The AHP sees the problem in three parts. The first part is the issue to be resolved. The second part is the alternate solutions that are available for solving the problem. The third is the criterion to evaluate the alternative solutions. The AHP knows that there are several criteria. Each criterion is different. If you need to choose between two restaurants, the taste and the waiting time are the two criteria. The taste is far more important than the waiting time and so on. Give a weightage of 2 to taste and 1 to waiting time. This way you will know which eatery meets your requirements better. While evaluating alternative solutions, weights must be attached to the criteria. This ensures reaching the best conclusion. Lately, management scientists have had problems when assigning weights. In the example above, the assignment of weights was arbitrary. It only had two criteria. As the number of criteria (factors) increases, the assignments become more and more subjective. The AHP method has intrinsic checks and balances. These checks ensure that you get logically consistent solutions when you compare the importance of the criteria in assigning weights to them. AHP is one of the most popular techniques in management science. Managers at General Electric, Ford Motors, and Motorola use it in their Six Sigma projects. The Connection Between AHP and Six Sigma AHP is a distinct technique. It is not a part of Six Sigma methodology. It was developed after the Six Sigma methodology. It has found large-scale applications in Six Sigma projects. AHP is used to assign numerical weights to factors. These factors could be used by the customers while assessing a product or they could be used by the management to appraise substitute solutions. 
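Returning to the FMEA scoring steps described above, here is a minimal sketch in Python. The failure modes, the 1/4/9 ratings, and the RPN cut-off are invented for illustration; in practice they would come from the team's brainstorming session and the organization's own rating scale.

# A minimal sketch of FMEA prioritization: each failure mode is rated 1, 4 or 9
# for occurrence, severity, and difficulty of detection, and the three ratings
# are multiplied into a Risk Priority Number (RPN). All entries are hypothetical.
failure_modes = [
    {"mode": "Raw material arrives out of spec", "occurrence": 4, "severity": 9, "detection": 4},
    {"mode": "Power outage stops the line",      "occurrence": 1, "severity": 9, "detection": 1},
    {"mode": "Operator skips inspection step",   "occurrence": 4, "severity": 4, "detection": 9},
    {"mode": "Label printer misfeeds",           "occurrence": 9, "severity": 1, "detection": 1},
]

for fm in failure_modes:
    fm["rpn"] = fm["occurrence"] * fm["severity"] * fm["detection"]

# Arrange in descending order of RPN and apply a cut-off for action.
CUTOFF = 100
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    action = "error-proof / control" if fm["rpn"] >= CUTOFF else "monitor"
    print(f"RPN {fm['rpn']:>3}  {fm['mode']:<35} -> {action}")

The failure modes above the cut-off are the ones the team would error-proof, change the process for, or put under a control plan, in line with the mitigation strategies described earlier.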
What is the Analytical Hierarchy Process (AHP) and How to Use it?
The Analytical Hierarchy Process (AHP) is a problem-solving tool. The AHP method was devised after studying the structure of a problem and the real obstacles that managers face while solving it.
The Need for AHP
The AHP sees a problem in three parts. The first part is the issue to be resolved. The second part is the alternative solutions available for solving the problem. The third is the set of criteria used to evaluate the alternative solutions. The AHP recognizes that there are several criteria and that each criterion carries a different importance. If you need to choose between two restaurants, taste and waiting time may be the two criteria. If taste is far more important than waiting time, you might give a weightage of 2 to taste and 1 to waiting time; this way you will know which eatery meets your requirements better.
While evaluating alternative solutions, weights must be attached to the criteria. This ensures that the best conclusion is reached. The difficulty management scientists face is in assigning these weights. In the example above, the assignment of weights was arbitrary and there were only two criteria; as the number of criteria (factors) increases, the assignments become more and more subjective. The AHP method has intrinsic checks and balances. These checks ensure that you get logically consistent results when you compare the importance of the criteria in order to assign weights to them. AHP is one of the most popular techniques in management science. Managers at General Electric, Ford Motors, and Motorola use it in their Six Sigma projects.
The Connection Between AHP and Six Sigma
AHP is a distinct technique. It is not a part of the Six Sigma methodology and was developed independently of it, but it has found large-scale application in Six Sigma projects. AHP is used to assign numerical weights to factors. These factors could be the ones customers use while assessing a product, or the ones management uses to appraise alternative solutions.
Drawbacks of AHP
The AHP has its own issues. It involves higher-level arithmetic and is based on eigenvectors. Doing AHP calculations on a spreadsheet by hand is painful. Software tools have therefore been developed for this purpose.
How to Use the Analytical Hierarchy Process (AHP)
AHP is perhaps the most advanced method available in the field of management and operations research. Its complexity makes it difficult to apply by hand, but software now automates the mathematics-intensive portion. The user follows a simple data-collection procedure, and the gathered data is then fed into the tool to get the results. Here is how it is done:
Step 1: Define Alternatives – The process starts by defining the alternatives to be evaluated. These alternatives could be different criteria that solutions must be assessed against, or they could be different features of a product that need weightage to better understand the customer's perception. At the end of this step, a complete list of all alternatives must be ready.
Step 2: Define the Problem and Criteria – According to AHP, a problem is a set of sub-problems. The AHP relies on breaking the problem into smaller problems; this is where the criteria to evaluate the solutions emerge. A user can keep going to deeper levels inside the problem, and deciding when to stop breaking it down is a subjective judgment. If a business needs to decide on the best investment option and AHP is used, the problem will be broken down into smaller problems like protection from downfall, the best chance of appreciation, market liquidity, and so on. All these sub-problems can be broken down further until management feels the required level of criteria has been reached.
Step 3: Establish Priority amongst Criteria Using Pairwise Comparison – The AHP method uses pairwise comparison to build a matrix. The business will weigh the relative importance of protection from downfall vs. liquidity; the next entry in the matrix is a pairwise comparison between liquidity and appreciation, and so on. The managers fill in this data as per the customer's expectations.
Step 4: Check Consistency – This check is built into most software that solves AHP problems. If, for us, liquidity is twice as important as protection from downfall and protection from downfall is half as important as appreciation, then the following emerges: Liquidity = 2 × (Protection from downfall), and Protection from downfall = ½ × (Chance of appreciation). Therefore liquidity must equal the chance of appreciation. If, in the pairwise comparison of liquidity and appreciation, the given weight is more or less than 1, the data is inconsistent. Inconsistent data gives inconsistent results, so this is where prevention is better than cure.
Step 5: Get the Relative Weights – The software tool does the mathematical calculations based on the data and assigns weights to the criteria. Once the weights are ready, one can evaluate the alternatives to find the solution that best matches the needs.
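The pairwise-comparison and weighting logic of Steps 3 to 5 can be illustrated with a small sketch. The comparison matrix below encodes the same hypothetical investment example (liquidity twice as important as protection from downfall, protection from downfall half as important as appreciation), and the weights come from the principal eigenvector, which is the standard AHP calculation; the criteria names and values are illustrative only:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria:
# protection from downfall, liquidity, chance of appreciation.
# Entry [i][j] says how many times criterion i is preferred over criterion j,
# so the matrix must be reciprocal: A[j][i] = 1 / A[i][j].
criteria = ["protection", "liquidity", "appreciation"]
A = np.array([
    [1.0, 0.5, 0.5],   # protection vs. the others
    [2.0, 1.0, 1.0],   # liquidity is twice as important as protection
    [2.0, 1.0, 1.0],   # appreciation equals liquidity (consistent data)
])

# The AHP weights are the principal eigenvector of A, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
weights = principal / principal.sum()

# A consistency index close to 0 means the pairwise judgments do not
# contradict each other; lambda_max equals the matrix size n for
# perfectly consistent data.
n = len(criteria)
lambda_max = eigenvalues.real.max()
consistency_index = (lambda_max - n) / (n - 1)

for name, w in zip(criteria, weights):
    print(f"{name:<12} weight = {w:.2f}")
print(f"consistency index = {consistency_index:.3f}")
```

With this consistent matrix the weights come out to roughly 0.2, 0.4 and 0.4, and the consistency index is 0.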
The TRIZ Matrix and How it was Developed
A Russian scientist, Genrich Altshuller, studied patents filed worldwide to look for patterns. He discovered that there were patterns across all industries: every industry had gone through the same technological cycle, and there was a remarkable similarity in the order in which patents were filed and in the principles that were fundamental to those patents. He documented his findings and shaped them into the TRIZ matrix.
TRIZ Matrix Basics
The TRIZ matrix calls a problem a contradiction. Altshuller saw that improving a product in one facet leads to deterioration in another facet. If a bigger engine gives a car more speed, the mileage takes a hit; this is a "contradiction". TRIZ addresses how to get more of one facet without worsening the performance of the others. He found that such contradictions lay at the core of all filed patents. Most contradictions in different industries are similar, and because of this, their solutions are also similar. The creators of the TRIZ matrix documented these solutions as the 40 inventive principles of problem-solving.
The Use of the TRIZ Matrix
The TRIZ matrix is used for generating alternative solutions. It is a way to assess issues and get ideas about the most credible solutions. Many of these solutions have been successfully implemented in the past and are tried and tested. With the TRIZ matrix, one can develop many solutions, which helps with the rational process of gauging all of them and then selecting the most feasible one.
Using a TRIZ Matrix
The TRIZ matrix is a problem-solving methodology. We start with the specific steps that apply to the TRIZ method, but we must also understand the logic behind those steps; this logic is the TRIZ methodology.
TRIZ Problem Solving Methodology
The TRIZ method is built on a set of "inventive principles". It says that every problem a business faces can be abstracted to a general problem that has been faced before. This general problem has a solution based on one or more of the 40 inventive principles, and that general solution can be used to devise a specific solution to the specific problem the business is facing. The terminology can be confusing, so consider a scenario in which an organization faces an issue. The TRIZ steps are:
- Identify the contradiction
- Determine what improves and what degrades
- Identify the design constraints
- Examine the proposed inventive principles
- Select the best principle
- Apply the inventive principle
Kano Analysis - Meaning, Application and Implications
The Kano Analysis is a simple tool with far-reaching implications for quality management. It has brought a paradigm shift in the way quality is seen by businesses across the world: the Kano analysis changed the measurement of quality from one-dimensional to two-dimensional.
One-Dimensional vs. Two-Dimensional Quality
Quality used to be measured on operational parameters alone: the company with the fewest defects had the best quality. In modern times, delivering defective products is simply out of the question, so this measure no longer represents the true state of quality. Dr. Noriaki Kano saw this issue and deduced that quality must be measured as a relationship between two dimensions: customer satisfaction and need fulfillment. Attributes of the product are placed in one of the four quadrants of the coordinate system created by the intersection of these two axes.
Application of the Kano Model
The Kano model is an interface between marketing and operations; both teams need to coordinate for the model to be applied correctly and for its benefits to be reaped. The application of the Kano model has the following steps:
Understanding that All Needs are Not Equal: The Kano model helps businesses separate and order needs. Some attributes of a product are compulsory, some are just nice to have, and some make no difference to the customers.
Helps in Conducting Surveys: Customer feedback must be incorporated into the product. Customers tell you what they want and what they do not want, but they do not tell you the scale of a need or how compelling it is.
Whether a need is compelling or merely nice to have has serious consequences for the benefits the consumers reap, so surveys must be planned to capture this aspect. Empirically, surveys using the Kano analysis have been successful in discovering customer needs and helping management fulfil them.
A Lean Product: With Kano analysis, businesses can add the features that give maximum delight to the customers, while indifferent features can be removed. This helps in providing customers with what they want at the desired price.
Implementing the Kano Analysis
The five types of features are:
- Must-Be: These are features vital to the product/service offering.
- One-dimensional: These are the features on which the rivalry between competitors is usually based.
- Attractive: These are features that can help the business attain customer delight.
- Indifferent: These are features about which customers don't care.
- Reverse: These are features that must not be fulfilled.
Implications of the Kano Model
The Kano model has been used by major corporations all over the world, and some important operational concepts have grown out of it:
- Modular Products: Many companies have designed their products to be modular.
- Variants: Corporations unable to create modular products create variants of the same product.
- Stripped-Down Versions: Corporations like IKEA have succeeded in selling stripped-down versions of their products.
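One common way to place a feature into one of the five categories above is the Kano questionnaire: each customer answers a functional question ("How would you feel if the product had this feature?") and a dysfunctional question ("How would you feel if it did not?"), and the pair of answers is looked up in an evaluation table. The article does not prescribe a specific questionnaire, so the sketch below uses a reduced, illustrative version of that table:

```python
# Simplified Kano classification sketch. Answers to the functional question
# ("feature present") and dysfunctional question ("feature absent") are each
# one of: "like", "must-be", "neutral", "live-with", "dislike".
# The lookup below is a reduced, illustrative version of the usual Kano table.
KANO_TABLE = {
    ("like", "dislike"): "One-dimensional",
    ("like", "neutral"): "Attractive",
    ("like", "live-with"): "Attractive",
    ("neutral", "dislike"): "Must-be",
    ("live-with", "dislike"): "Must-be",
    ("neutral", "neutral"): "Indifferent",
    ("dislike", "like"): "Reverse",
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map a pair of survey answers to a Kano category."""
    if (functional, dysfunctional) in {("like", "like"), ("dislike", "dislike")}:
        return "Questionable"  # contradictory answers, discard from analysis
    return KANO_TABLE.get((functional, dysfunctional), "Indifferent")

# Hypothetical survey responses for one feature, e.g. "built-in GPS".
responses = [("like", "neutral"), ("like", "dislike"), ("neutral", "neutral")]
for func, dysf in responses:
    print(func, "/", dysf, "->", classify(func, dysf))
```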
What is the Pugh Matrix and How to Use it?
The Pugh Matrix is one of the most popular ways of finding the best solution from a number of alternative solutions. Its popularity lies in its simplicity: the tool is not mathematically intensive, it is simple to use, and it arrives at much the same answers as mathematically intensive methods with far less effort. Here is how to use the Pugh Matrix.
Step 1: List the Criteria in a Vertical List - List down the criteria for evaluation in a vertical list on the extreme left of the spreadsheet. There could be many possible criteria; the Pugh Matrix commonly uses technological impact, cost impact, and organizational acceptance.
Step 2: Select the Datum - The datum is what the Six Sigma project team thinks is the most feasible solution. Selecting a suitable datum is vital because every other solution will be assessed against it.
Step 3: List the Alternative Solutions Horizontally - List all the alternative solutions horizontally. This forms a matrix with the criteria on the vertical axis and the solutions on the horizontal axis.
Step 4: Filling in the Pugh Matrix - The Pugh Matrix is filled in using "+", "–" or "S".
- "+" means a solution scores better than the datum on a particular criterion.
- "–" means a solution scores worse than the datum on a particular criterion.
- "S" means a solution scores the same as the datum on a particular criterion.
- The datum has an "S" rating on all its criteria: it is the same as itself and therefore has a score of 0.
- No numerical values are given to the degree of the positives or negatives. This is a shortcoming of the Pugh Matrix: a solution that is slightly inferior to the datum and one that is very inferior to it both receive the same score. Here the Pugh Matrix relies on human judgment.
Step 5: Aggregating the Scores - The scores are aggregated by counting the number of "+" and "–" marks of a given solution. Sometimes weights are given to the criteria; if a criterion carries a weight of 1.5, gaining a "+" on it counts as 1.5 "+" when aggregating the scores. The "+" and "–" totals are used to compile the final score. If a solution has a score greater than 0 (the datum), it is selected. If all the scores are lower than 0, the datum remains the final solution.
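The +/S/– scoring and weighted aggregation described in Steps 4 and 5 can be tabulated with a few lines of code. A minimal sketch, with hypothetical criteria, weights, and ratings:

```python
# Minimal Pugh Matrix sketch. Criteria, weights, and ratings are hypothetical.
# Each alternative is rated "+", "-" or "S" against the datum on every criterion.
criteria_weights = {
    "technological impact": 1.0,
    "cost impact": 1.5,          # example of a weighted criterion
    "organizational acceptance": 1.0,
}

ratings = {
    "Solution A": {"technological impact": "+", "cost impact": "S",
                   "organizational acceptance": "-"},
    "Solution B": {"technological impact": "+", "cost impact": "+",
                   "organizational acceptance": "S"},
}

SCORE = {"+": 1, "S": 0, "-": -1}

def pugh_score(solution_ratings):
    """Weighted sum of +1 / 0 / -1 scores against the datum."""
    return sum(SCORE[r] * criteria_weights[c] for c, r in solution_ratings.items())

scores = {name: pugh_score(r) for name, r in ratings.items()}
best = max(scores, key=scores.get)
for name, s in scores.items():
    print(f"{name}: {s:+.1f} vs. datum (0)")
print("Selected:", best if scores[best] > 0 else "keep the datum")
```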
Six Sigma Tools - Five Whys & VOC (Voice of Customer)
The 5 Whys Analysis
The Five Whys analysis, or root cause analysis, is one of the 7 basic tools of Six Sigma. The principal idea is that every effect has a cause, so a quality problem can be seen as an effect behind which there are one or more causes. There is a whole chain of symptoms between the root cause and the effect it finally produces. Tracing this chain helps management pinpoint and solve the problem from its root cause.
Jeff Bezos and the Application of the Five Whys: Jeff Bezos showed how the Five Whys can be used. During a visit to one of Amazon's shop floors, he learned that an associate had injured his thumb when it got caught in a conveyor belt. This is how Bezos walked through the incident:
Question: Why did the associate damage his thumb? Answer: Because it got caught in the conveyor. Question: Why did his thumb get caught in the conveyor? Answer: Because he was chasing his bag, which was on a running conveyor. Question: Why did he chase his bag? Answer: Because he had put his bag on the conveyor, and it then switched on to his surprise. Question: Why was his bag on the conveyor? Answer: Because he was using it as a table.
Conclusion of the Case: The associate simply needed a table. There was none around, so he used the conveyor as a temporary table. To prevent this from happening again, tables need to be provided at the appropriate workstations, or portable, light tables need to be made available for the associates to use. A greater focus on safety training is also required, along with preventive maintenance as part of standard work.
The Five Whys Methodology
The Five Whys is a powerful tool. It helps you sift through the symptoms, which are the surface-level issues sitting above the root cause. Resolving the root cause solves all the problems that exist in between.
Subjectivity Involved
The Five Whys process is semi-structured. Different people using it will get very different results; the process is only as good as the people behind it. It is important to ensure that the team is cross-functional and genuinely involved and motivated, in order to obtain the best results from the process.
How to Effectively Use the Five Whys
The 5 Whys is a basic Six Sigma tool whose importance cannot be discounted, but there is a lot of subjectivity involved in its usage. To obtain the best results, it should be used by a team of cross-functional experts. Here are the steps to follow:
Step 1 - Be Careful While Creating Problem Statements: The 5 Whys is used to move past symptoms and make progress in finding and solving the root cause. The problem must be framed correctly, and its definition should be objective: it must contain facts and measurements and leave no ambiguity around words like "more", "less", etc. Well-defined problems can always be worked upon.
Step 2 - Honesty – Avoid the Blame Game: Power and politics are a hindrance to the Five Whys analysis. Sometimes brainstorming sessions do not work because departments deliberately shield problems in order to avoid being penalized for having been inefficient somewhere in the past. All participants need to be objective. There must be no penalties for bringing out past and present shortcomings; it should be encouraged instead. The rules of the discussion must be made clear before the process begins, and the focus must be on results and not on the people involved. Organizations that do this succeed with their Six Sigma endeavors.
Step 3 - Parent-Child Diagram: The first task is to get as many problems as possible onto the discussion board. Once the answers begin to get repetitive, one must start mapping the levels of causes. Suppose A causes B, B causes C, and C causes D; when D is the problem being solved, A is the level 1 (root) cause, B the level 2 cause, and C the level 3 cause.
Step 4 - Ensure That the Cause is Systemic: A systemic cause is one where the system itself is to be corrected. A Six Sigma process does not allow errors. The root cause analysis should ensure that it does not degenerate into a finger-pointing activity; the idea is to make the system efficient so that it does not allow any errors, no matter what.
Voice of Customer (VOC)
The Voice of Customer is not a tool of any single Six Sigma process. It is the underlying philosophy of Six Sigma that every process improvement exercise is based on the customer. The Voice of Customer is closely related to the Kano analysis, but the VOC is broader. The VOC needs to be conducted at the start of any Six Sigma project for it to be successful.
Voice of Customer is an exercise that sits at the interface between marketing and operations. The methods are usually carried out by the marketing department, but in the Six Sigma methodology it is important that the operations department be a part of it too. The next step after understanding the VOC is Quality Function Deployment. The idea is to convert the Voice of Customer into the Voice of the Engineer; this is what builds customer satisfaction for the organization.
Researching the Needs
There are many ways to learn the needs of a customer. These methods can be used in isolation or in combination, and one method can be used to verify the needs gathered through another. Some of the common methods are:
Lead Users: Lead users are innovative users of the product. They use products in ways beyond their intended usage, and they are an important source of information for any business. Apple is known for understanding the needs of its lead users; this is why it creates innovative products.
Focus Group: A focus group means interviewing a small group of customers. The interviews are unstructured: customers are asked to state their opinions about the product or service in question, and the session is moderated to ensure it does not go off track. Domino's Pizza used focus group research to learn that customers thought its pizza tasted like cardboard, and it introduced different crusts to meet the needs of different customers.
Sample Survey: Customers are asked to fill in a questionnaire. Standard questions are asked, opinions are noted down, and inferences are then drawn using statistical procedures. This approach is highly structured. However, customers have needs they consider obvious and therefore never speak about, and this method does not capture such data.
Warranty Data/Customer Returns/Feedback: Another way to know the VOC is to look at data where customers show their dissatisfaction. Warranty data shows the features of a product that do not work, customer returns point to causes of dissatisfaction, and negative feedback is an important source of information in its own right.
Challenges Faced in Conducting a Voice of Customer Exercise
VOC may sound simple, but anyone experienced with market research will know the issues involved. For a Six Sigma project to succeed, the VOC must be accurate. A few common problems are listed below:
Contradictory Data: One would expect customers in similar segments to have similar needs.
VOC data often shows otherwise: customers from the same demographic and psychographic segments sometimes have totally different needs. This indicates that the initial segmentation of customers was done incorrectly. The VOC helps you understand whether the basis used for segmentation is correct or not, and if customers give contradictory feedback, this can be raised with the marketing team. A Six Sigma project and process is useless if its output does not meet the needs of the customers.
Continuous VOC: VOC is not a one-off exercise. Customer needs evolve constantly, and so do the technological advancements that enable us to fulfil those needs. Customers get accustomed to product features: what delights the customer today will be a standard feature tomorrow. The challenge of VOC is to create a responsive process that can listen to the evolving needs of customers and change accordingly. This kind of feedback-based control is used in the internal processes of an organization but is rarely used in customer-facing processes.
Capturing and Organizing: Many organizations have implemented CRM systems to support a continuous VOC, but these systems need to be embedded in the culture of the business. Many organizations penalize the agent who is on the receiving end of negative feedback, which builds a tendency to hide and withhold that feedback. Businesses must instead reward agents who point out the gaps in their VOC process. This is a cultural challenge, and it requires coordination between the marketing and operations departments to foster a culture that is entirely customer-centric.
Six Sigma Tools - The Pareto Chart & The Scatter Plot
What is Pareto Analysis and its Application in Six Sigma Projects
What is the Pareto Principle? Vilfredo Pareto was a famous Italian economist. While analyzing the distribution of income among the people of Italy, he observed that 80% of the income went to 20% of the population. He then began observing this 80/20 pattern across many other phenomena and found it to hold remarkably widely. This principle is called the Pareto Principle. The Pareto Principle says that doing the right 20% of things will give you 80% of the results you seek. It is all about isolating the vital few from the trivial many, and the idea is to work on the vital few to get the best results. The phrases "vital few" and "trivial many" come from the Pareto philosophy.
What is Pareto Analysis? Pareto analysis is a Six Sigma quality tool based on the Pareto Principle. When you take charge of a department or work area, many problems crop up. Sometimes these problems are so numerous that it becomes tedious for a manager to make sense of the data at hand. Managers use the Pareto Principle to segregate, for example:
- the 20% of clients that bring in 80% of the revenue
- the 20% of defects that cause 80% of customer complaints
- the 20% of activities that produce 80% of the defects
How Can Pareto Analysis be Applied in Six Sigma Projects? A manager solves many problems, but the resources at their disposal are limited and need to be put to optimal use. Resources should be directed at the problems whose resolution will be most beneficial or remove the most hassles. One must conduct a Pareto analysis to move processes towards being free of defects: in a Pareto analysis, one finds the chief causes of variation and rectifies them. This is recommended by management scientists who use data for decision-making; according to them, the data is overwhelmingly in support of the Pareto principle.
How to Create a Pareto Chart? A Pareto chart is one of the 7 main tools of quality control, and all students of quality management need to know how to prepare and read one. The procedure is as follows:
Step 1: Find the Causes - We start by finding the underlying causes. These causes may have positive effects, like the revenues or profits of a company, and will need to be maximized and intensified; or they may have negative effects, like losses or defects, and will need to be minimized or removed altogether.
Step 2: Prepare a Frequency Table - Once listed, the causes need to be checked for their importance. This is done by running the process many times, or simulating it, and noting the outcomes in a frequency table. Most managers also like to include a cumulative frequency column.
Step 3: Convert to Percentages - Once the causes are listed and a frequency table is created, the numbers are converted into percentages. Percentages are easier to understand than raw counts.
Step 4: Arrange in Descending Order - The causes are then arranged so that the most important ones are at the top and the lesser ones at the bottom; the arrangement is always in descending order. The cumulative percentages also need to be tracked. The causes that appear before the cumulative 80% mark is reached are the vital few factors. One can then see the important factors at a glance and make the necessary moves to achieve the required outcomes.
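Steps 1 to 4 amount to a sorted frequency table with cumulative percentages. A minimal sketch using hypothetical defect counts (drawing the actual bars and cumulative line would typically be done with a charting library such as matplotlib):

```python
# Pareto analysis sketch: hypothetical defect counts for a process.
defect_counts = {
    "Scratches": 104,
    "Misaligned label": 55,
    "Wrong colour": 21,
    "Dented casing": 11,
    "Other": 9,
}

total = sum(defect_counts.values())

# Arrange causes in descending order and compute cumulative percentages.
cumulative = 0.0
print(f"{'Cause':<18}{'Count':>7}{'Percent':>10}{'Cumulative':>12}")
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    percent = 100.0 * count / total
    cumulative += percent
    marker = "  <-- vital few" if cumulative <= 80 else ""
    print(f"{cause:<18}{count:>7}{percent:>9.1f}%{cumulative:>11.1f}%{marker}")
```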
How to Read a Pareto Chart? The Pareto chart is the Pareto principle displayed in chart form, and it may be puzzling at first because it shows both individual and cumulative data. Let's see how a Pareto chart is read.
Read the Bars for Individual Values: The bars along the X axis are always in descending order and represent the most important factors. They give the user a view of the individual values.
Read the Line for Cumulative Values: The line at the top shows the cumulative values. A user can therefore see all individual and cumulative values at a glance, which helps to segregate the "vital few" from the "trivial many."
What is a Scatter Plot?
A scatter plot is a graphical tool designed to provide a view of the process at a single glance. It studies the association between important variables. When two variables are involved, it is called a bivariate scatter plot; when more than two variables are involved, it is known as a multivariate scatter plot.
What is Correlation? Correlation is the degree to which two variables vary together. For example, whenever the cycle time is high, customer dissatisfaction is also high. Correlation is measured on a scale of -1 to +1: +1 shows a perfect positive correlation and -1 a perfect negative correlation. In practice, perfect correlations almost never exist, and if you see one, it should be doubted. If two variables are being recorded and a high degree of correlation exists between them, this is useful information for management.
The Relationship May or May Not be Cause and Effect: Correlation is often confused with causation. Just because two variables tend to move in tandem does not mean we have proof that one causes the other, and assuming a causal connection could lead to unintended losses.
Best Used When One Variable is Under Control: Correlation works well when one of the variables is under control; this is known as the independent variable. Experimenters can vary this variable and record the other, which determines the extent of the correlation.
Why is Visualization Important? Management forecasts many variables before agreeing to a budget, and correlation analysis helps estimate the levels of these variables; this is how accurate budgets are developed. Scatter plots are important visualization tools because points that lie far away from where most of the points are clustered can strongly influence the correlation coefficient, which is the summary statistic.
What Does a Scatter Plot Look Like? A scatter plot is based on X and Y axes. One variable is assigned to the x-axis and the other to the y-axis, and each point on the graph corresponds to a pair of x and y values. In the case of three-dimensional or multivariate analyses, more axes are incorporated; these are complex visualizations and need specialized software to interpret.
How to Draw a Scatter Plot? Drawing scatter plots used to be a complex task involving statisticians and scientists. Most of it is now automated, but human involvement and judgement are still required. The steps are as follows:
Step 1: Decide the Two Variables - The most important step comes even before the analysis begins. In textbooks we always know the variables under scrutiny; in real life there are many variables and many possible correlations. Variables should be selected such that a material relationship exists between them which, if understood, will benefit the process.
Step 2: Collect Data - Once the variables are known, data needs to be gathered to draw meaningful conclusions. This is done by applying the relevant design of experiments and taking measurements that will act as inputs into the analysis. The principle of GIGO, i.e., Garbage In, Garbage Out, applies here: poor-quality data will produce poor-quality conclusions.
Step 3: Map the Data - The data collected is mapped onto the X and Y axes of the Cartesian coordinate system. This shows the viewer where most of the points are centered and where the outliers are, and prompts questions about why this is so. In modern times this is done using computers; there is software that can fetch incoming data in real time and map it onto a scatter plot.
Step 4: The Line of Best Fit - The next step is to compute the line of best fit for all the data points. Mathematically, a line is worked out that passes through the cloud of points and is closest to them overall. This line denotes an equation that can be used to predict the relationship between the variables. Done by hand, this step is prone to human error; today software can do it seamlessly and in no time at all.
Step 5: Come Up with an Exact Number - The next step is to compute a correlation coefficient. This number is the best metric for understanding correlation and lies between -1 and +1. The software will work it out and give you the correlation coefficient; even an MS Excel sheet can be used for this step.
Step 6: Interpret the Number - The last step is to interpret the number. Anything above +0.5 or below -0.5 suggests a robust correlation, 0 represents no correlation, and -1 or +1 represents a perfect correlation. A perfect correlation may hint at a causal link, but correlation by itself does not imply causation.
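Steps 2 to 6 can be reproduced with standard scientific libraries. A small sketch using hypothetical cycle-time and dissatisfaction measurements; numpy provides the correlation coefficient and the line of best fit, and matplotlib draws the scatter plot:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired measurements: process cycle time (x) vs.
# customer dissatisfaction score (y).
cycle_time = np.array([2.1, 3.4, 4.0, 4.8, 5.5, 6.1, 7.2, 8.0])
dissatisfaction = np.array([1.0, 1.8, 2.1, 2.9, 3.2, 3.8, 4.5, 5.1])

# Correlation coefficient (between -1 and +1).
r = np.corrcoef(cycle_time, dissatisfaction)[0, 1]
print(f"correlation coefficient r = {r:.2f}")

# Line of best fit (least squares), used to predict y from x.
slope, intercept = np.polyfit(cycle_time, dissatisfaction, 1)

# Scatter plot with the fitted line.
plt.scatter(cycle_time, dissatisfaction, label="observations")
plt.plot(cycle_time, slope * cycle_time + intercept, label="line of best fit")
plt.xlabel("cycle time")
plt.ylabel("customer dissatisfaction")
plt.legend()
plt.show()
```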
Six Sigma Tools - SIPOC Matrix & Fishbone Diagram
Supplier, Input, Process, Output and Customer (SIPOC) Matrix
The SIPOC Matrix is perhaps the most important tool used at the start of any Six Sigma project, and people who know what SIPOC does vouch for its usefulness. Some of its main uses are:
Levels of Process: When we use the SIPOC chart, we ensure that there is agreement about the process under discussion. Processes operate at various levels, and it can be problematic if project team members hold differing views about the process in question. SIPOC saves the day here: anyone who has doubts can always consult it.
Scope of Project: Processes are perpetual and continuous; when one ends, the next starts. All businesses are built on links between input and output stages. Some people might argue that the recruitment process starts with the ads placed in the newspapers; others might say it begins only after the candidate has arrived for an interview. Both could be correct, and this ambiguity needs to be removed before the project starts. Ambiguity in the middle of a Six Sigma project can be disastrous, both for the people and the firm. SIPOC ensures that the scope of the project is known to all: everyone knows exactly where the process starts, where it ends, and what its purpose is.
Value Stream Maps: The SIPOC chart is helpful for creating value stream maps. The SIPOC matrix makes sure that the flow of material and information has been outlined step by step and documented; this is then used in the value stream map to develop an "as-is" process and a "to-be" process.
Customer as a Supplier: SIPOC helps in understanding information loops. Sometimes the customer is also a supplier, because the customer supplies vital information in the form of needs. In such cases, correct and material information should be collected from them.
Entities Can be Suppliers or Customers: Sometimes suppliers are customers too. When placing an order with a supplier, you need to give them inputs in the form of requirements; in that part of the process, the supplier is a customer. Such changes of roles are common in processes, and a SIPOC structures the process so that everyone involved knows and understands it fully.
Creating a SIPOC Chart
The SIPOC chart is one of the fundamental documents of any process improvement project, so it must be developed in a way that is easy to understand and coherent with the logic built into the process. A method for developing a SIPOC chart properly is as follows; a small sketch of the resulting structure follows below.
- Start with the Process: A flowchart displays the stages of the process. Once the process has been defined this way, there is little or no confusion, and this is a good point at which to start the SIPOC matrix. List down all activities and events as in the process map; the level of detail being used also needs to be noted.
- List the Metrics Measured at Process Steps: A well-defined process is one whose behaviour we can predict, and anything that can be measured can also be managed. Accurate metrics must be developed that show the exact state of the process. It is advisable to know all metrics before the SIPOC exercise finishes; if this is not done, another iteration will be needed and time will be wasted.
- Complete Outputs with Measurements: We must also describe the ideal output. This is difficult because customers sometimes do not tell us everything about their needs. It must be ensured that the outputs are described objectively, with agreement between the process owners and the customers about the objectives.
These objectives need to be clearly defined, and the outflow of both materials and information must be mapped.
- Complete Inputs with Measurements: When you know your outputs, work backwards to the inputs that are needed. Consider both the materials and the information required for the proper functioning of the process. When mapping inputs, also identify the suppliers who will provide them, and do this at each and every stage of the process. This completes the matrix.
- Prioritize and Plan: Once your SIPOC matrix has been completed, recognize that not all inputs or outputs have the same importance for the customer. Some outputs matter more to customers than others, and the critical outputs need a closer, microscopic look. For this reason, outputs and customers must be ranked and listed, and the same must be done with inputs and suppliers.
SIPOC gives insight into the process, and insight is the basis of all innovation.
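A SIPOC chart is ultimately a structured table, so it can be captured as data that the whole team reads the same way. A minimal sketch for a hypothetical recruitment process; the fields and entries are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class SIPOC:
    """One list per element of a SIPOC chart (hypothetical example)."""
    suppliers: list = field(default_factory=list)
    inputs: list = field(default_factory=list)
    process: list = field(default_factory=list)   # high-level steps, in order
    outputs: list = field(default_factory=list)
    customers: list = field(default_factory=list)

recruitment = SIPOC(
    suppliers=["Hiring manager", "Job boards"],
    inputs=["Approved job description", "Candidate applications"],
    process=["Screen applications", "Interview", "Make offer"],
    outputs=["Signed offer letter", "Onboarding checklist"],
    customers=["Hiring manager", "New employee"],
)

for element in ("suppliers", "inputs", "process", "outputs", "customers"):
    print(f"{element.capitalize():<10}: {', '.join(getattr(recruitment, element))}")
```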
How to Use a Fishbone Diagram
The fishbone diagram is also known as the cause-and-effect diagram and the Ishikawa diagram. It is one of the seven basic tools of quality management and is important to all Six Sigma projects. The fishbone diagram is a simple yet highly effective tool for solving problems.
How We Usually Solve Problems: Management solves problems all the time, but it is not always efficient at the exercise. Sometimes managers do not even acknowledge that a problem exists; sometimes they know the problem but lack insight into its causes. Fishbone diagrams are the remedy for this. The teams that build them include workers, technical staff, management, and support-function staff, which makes them cross-functional.
To be Used in Teams at Brainstorming Sessions: Once a cross-functional team is finalized, a brainstorming session is organized. This is where a fishbone diagram comes in handy, because it helps structure the inputs from the members of the team.
What Exactly Does a Fishbone Diagram Do? The fishbone diagram looks like a fishbone. The issue is penned at the far right-hand side of the diagram, and a central line is drawn from the left towards it. This line branches into several lines, each of which represents a class of problems. By classifying problems, we recognize that they might have similar root causes, which is how issues can be solved more effectively with minimal resources.
Categories Give Structure to Thinking: Brainstorming has no fixed formula and can be done without a fishbone diagram too. Fishbone diagrams give structure to the thoughts because they have pre-defined classes; when people brainstorm with them, they have a specific way of looking at the problem.
Only One Issue per Diagram: The downside of a fishbone diagram is that there is only one issue per diagram. In quality management we know that causes and effects rarely sit in a single layer. The use of a fishbone diagram can therefore become problematic when one issue leads to another in a downward spiral: many diagrams may be needed and the problem-solving may become intricate.
How to Draw a Fishbone Diagram? All Six Sigma and quality students must know the fishbone diagram and how to use it. Let's see how one is drawn:
Step 1 - List the Effect (Problem) on the Right: The very first step is to write down the effect and draw an arrow from the left pointing towards the problem on the right.
Step 2 - Branch Out into Categories: From the central line, several classes branch out. These classes can be custom defined, and defining them is a key step in the brainstorming exercise. Choosing the wrong classes can have negative implications, so pre-defined categories are mostly used. Some of the most common category sets are:
The 8 Ms of Manufacturing (famously used by Toyota):
- Machine (technology)
- Method (process)
- Material (includes raw material, consumables and information)
- Manpower (physical work) / Mindpower (brain work)
- Measurement (inspection)
- Milieu/Mother Nature (environment)
- Management/Money Power
- Maintenance
The 8 Ps of the Service Industry:
- Product (= Service)
- Price
- Place
- Promotion/Entertainment
- People
- Process
- Physical Evidence
- Productivity & Quality
The 5 Ss of the Service Industry:
- Surroundings
- Suppliers
- Systems
- Skills
- Safety
Specific Causes as Branches: Once the categories are created, it is time for brainstorming. One way is to select a class and go around to every person for suggestions. Everyone can make only one suggestion per round; if they do not have one, they pass. This continues until the majority of the group says pass. When all causes have been listed as sub-branches, move on to the next class, and repeat until the diagram is complete.
Refine and Highlight: When the diagram is full and brainstorming is complete, refine the causes. Many causes often come to the fore, and we must prioritize them; this ensures that there is agreement on what is important to the process.
Repeat after a Few Days: A fishbone diagram should become a regular feature of the shop floor; Toyota uses them year in, year out. Participants are advised to mull over the issues and tell the group if they have developed new insights. Frequent use and firm focus produce excellent results.
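The finished diagram is essentially a set of causes grouped under categories. A minimal sketch that captures the output of such a brainstorming session as data, using hypothetical causes under a few of the manufacturing "M" categories listed above:

```python
# Fishbone (cause-and-effect) capture sketch with hypothetical causes,
# grouped under a few of the manufacturing "M" categories listed above.
effect = "High defect rate on packaging line"

fishbone = {
    "Machine": ["Worn sealing element", "No preventive maintenance schedule"],
    "Method": ["Changeover procedure not standardized"],
    "Material": ["Film thickness varies between suppliers"],
    "Manpower": ["New operators not trained on setup"],
    "Measurement": ["Seal strength checked only once per shift"],
}

print(f"Effect: {effect}")
for category, causes in fishbone.items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - {cause}")
```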