Tuesday, October 12, 2010

Notes on Quality Management Tools and Methods

(Assembled from several wiki sources. These tools help with ideation, production, and marketing; in some cases they give an overview of all three together, so there may be some knowledge to discover about a de facto business model.)

There are many established quality-management tools.
Here’s an overview of some methods used.

* 5 Whys – When discussing a breakdown, ask why it happened; once that reason is established, ask why that first reason happened, and keep asking why several more times. It often reveals underlying attitudes or undeveloped ideas about maintenance that can be fixed to reduce future breakdowns.

* Analysis of variance (ANOVA) – Used to determine whether a single issue or multiple issues account for the variance in process results; can be used to prioritize areas for improvement.
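A minimal one-way ANOVA sketch (scipy is assumed to be available; the three machines and their measurements are made up for illustration):

```python
# Do three machines produce parts with the same mean length?
from scipy import stats

machine_a = [10.1, 10.3, 9.9, 10.2, 10.0]
machine_b = [10.4, 10.6, 10.5, 10.7, 10.5]
machine_c = [10.1, 10.0, 10.2, 10.1, 9.9]

f_stat, p_value = stats.f_oneway(machine_a, machine_b, machine_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one machine's mean differs,
# pointing to where improvement effort should be prioritized first.
```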

* ANOVA Gauge R&R (repeatability and reproducibility) – The important factors are the measuring instruments, operators (people), test methods, specifications, and parts (or specimens being measured). Repeatability is the variation in measurements taken by a single person or instrument on the same item under the same conditions. Reproducibility is the variability induced by different operators measuring the same part.
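A heavily simplified sketch of the two components (not the full ANOVA Gauge R&R calculation, which partitions variance with a two-way ANOVA); the two operators, three parts, and readings are invented for illustration:

```python
import numpy as np

# measurements[operator][part] -> repeated readings of the same part
measurements = {
    "operator_1": {"part_1": [5.01, 5.02, 5.00],
                   "part_2": [4.98, 4.99, 4.97],
                   "part_3": [5.05, 5.04, 5.06]},
    "operator_2": {"part_1": [5.03, 5.05, 5.04],
                   "part_2": [5.00, 5.01, 5.02],
                   "part_3": [5.07, 5.08, 5.06]},
}

# Repeatability: pooled variation of repeated readings by one operator on one part.
within_cell_vars = [np.var(reads, ddof=1)
                    for parts in measurements.values()
                    for reads in parts.values()]
repeatability_sd = np.sqrt(np.mean(within_cell_vars))

# Reproducibility (rough): variation between the operators' overall averages.
operator_means = [np.mean([np.mean(reads) for reads in parts.values()])
                  for parts in measurements.values()]
reproducibility_sd = np.std(operator_means, ddof=1)

print(f"repeatability sd ~ {repeatability_sd:.4f}")
print(f"reproducibility sd ~ {reproducibility_sd:.4f}")
```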
* Axiomatic design – A systems design methodology analyzing the transformation of customer needs into functional requirements (FRs), design parameters (DPs), and process variables (PVs). It relies on design axioms, which are accepted without proof, and it addresses fundamental issues in Taguchi methods. The theory’s creator, Professor Nam Suh, says: “The goal of axiomatic design is to make human designers more creative, reduce the random search process, minimize the iterative trial-and-error process, and determine the best design among those proposed.” It uses a dependency structure matrix (DSM).
Axiomatic design is a decomposition process going from customer needs to FRs, to DPs, and then to process variables (PVs), thereby crossing the four domains of the design world: customer, functional, physical, and process.
In decomposing the design, a designer first “explodes” higher-level FRs into lower-level FRs, proceeding through a hierarchy of levels until a design can be implemented. At the same time, the designer “zigzags” between pairs of design domains, such as between the functional and physical domains. Ultimately, zigzagging between “what” and “how” domains reduces the design to a set of FR, DP, and PV hierarchies.
There are two axioms: the independence axiom and the information axiom. (From these two axioms come a bunch of theorems that tell designers “some very simple things,” says Suh. “If designers remember these, then they can make enormous progress in the quality of their product design.”) The first axiom says that the functional requirements within a good design are independent of each other. This is the goal of the whole exercise: identifying DPs so that “each FR can be satisfied without affecting the other FRs,” says Suh.
The second axiom says that when two or more alternative designs satisfy the first axiom, the best design is the one with the least information. That is, when a design is good, information content is zero. (That’s “information” as in the measure of one’s freedom of choice, the measure of uncertainty, which is the basis of information theory.) “Designs that satisfy the independence axiom are called uncoupled or decoupled,” explains Robert Powers of Axiomatic Design Software, Inc. “The difference is that in an uncoupled design, the DPs are totally independent; while in a decoupled design, at least one DP affects two or more FRs. As a result, the order of adjusting the DPs in a decoupled design is important.”
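A small sketch of the independence axiom in matrix terms: writing the design as FRs = A · DPs, the structure of the matrix A (assumed binary here purely for illustration) distinguishes uncoupled, decoupled, and coupled designs:

```python
import numpy as np

def classify_design(A):
    """Classify a design matrix per the independence axiom (simplified sketch)."""
    A = np.asarray(A, dtype=bool)
    off_diag = A & ~np.eye(A.shape[0], dtype=bool)
    if not off_diag.any():
        return "uncoupled: each DP affects exactly one FR"
    if not np.triu(off_diag, 1).any() or not np.tril(off_diag, -1).any():
        return "decoupled: FRs can be met if the DPs are adjusted in the right order"
    return "coupled: violates the independence axiom"

print(classify_design([[1, 0], [0, 1]]))  # uncoupled
print(classify_design([[1, 0], [1, 1]]))  # decoupled
print(classify_design([[1, 1], [1, 1]]))  # coupled
```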
* Business Process Mapping – Defining what a business entity does, who is responsible, to what standard a process should be completed, and how the success of the business process can be determined – reducing uncertainty as to the requirements of every internal business process. Thus a business process illustration/map can be produced.

* Cause & effect diagram (also known as “fishbone” or Ishikawa diagram) – This is used in the cop show, “Without a Trace,” where a horizontal timeline is drawn on a whiteboard, and as the detectives discover possible relevant events in the missing person’s life, they are plotted on the board. But also, if you categorize everything that goes into a product by source (measurement, materials, personnel, environment, methods, and machines), you can discover where more attention needs to go to reduce defects.
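At its core a fishbone diagram is structured brainstorming: causes grouped under those classic categories. A minimal sketch of the underlying structure, with invented causes for a hypothetical defect:

```python
fishbone = {
    "effect": "scratched housings",
    "causes": {
        "Measurement": ["gauge not calibrated"],
        "Materials":   ["soft plastic lot"],
        "Personnel":   ["new operator on night shift"],
        "Environment": ["dust near packing station"],
        "Methods":     ["parts stacked without liners"],
        "Machines":    ["worn conveyor belt"],
    },
}
for category, causes in fishbone["causes"].items():
    print(f"{category}: {', '.join(causes)}")
```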

* Chi-square tests of independence and goodness of fit – statistical hypothesis tests in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true, or in which this is asymptotically true; the best known is Pearson’s chi-square test. The chi-square statistic is calculated by finding the difference between each observed and theoretical frequency for each possible outcome, squaring them, dividing each by the theoretical frequency, and taking the sum of the results. A second important part of determining the test statistic is to define the degrees of freedom of the test: this is essentially the number of observed frequencies adjusted for the effect of using some of those observations to define the "theoretical frequencies".
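A worked goodness-of-fit sketch following that recipe (is a die fair?); the 120 roll counts are made up, and scipy is assumed only for the p-value:

```python
from scipy import stats

observed = [18, 22, 16, 14, 19, 31]     # counts from 120 rolls (illustrative)
expected = [120 / 6] * 6                # 20 per face if the die is fair

# sum of (observed - expected)^2 / expected
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                  # degrees of freedom
p_value = stats.chi2.sf(chi_sq, df)
print(f"chi-square = {chi_sq:.2f}, df = {df}, p = {p_value:.4f}")
# scipy.stats.chisquare(observed, expected) returns the same statistic in one call.
```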
* Control chart – used to determine whether or not a manufacturing or business process is in a state of statistical control. If it is, then data from the process can be used to predict the future performance of the process. If the chart indicates that the process being monitored is not in control, analysis of the chart can help determine the sources of variation, which can then be eliminated to bring the process back into control. A control chart is a specific kind of run chart that allows significant change to be differentiated from the natural variability of the process.
The control chart can be seen as part of an objective and disciplined approach that enables correct decisions regarding control of the process, including whether or not to change process control parameters. Process parameters should never be adjusted for a process that is in control, as this will result in degraded process performance.
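A minimal individuals (X) control chart sketch with invented data; sigma is estimated from the average moving range (MR-bar / 1.128), the usual estimate for individuals charts:

```python
import numpy as np

data = np.array([9.9, 10.1, 10.0, 10.2, 9.8, 10.1, 10.0, 12.5, 10.1, 9.9])
center = data.mean()
moving_ranges = np.abs(np.diff(data))
sigma_hat = moving_ranges.mean() / 1.128    # d2 constant for subgroups of two
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

for i, x in enumerate(data, start=1):
    flag = "  <-- out of control" if not lcl <= x <= ucl else ""
    print(f"sample {i:2d}: {x:5.2f}{flag}")
print(f"center = {center:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```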
* Correlation (of dependence) – Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. Correlations can also suggest possible causal, or mechanistic relationships; however, statistical dependence is not sufficient to demonstrate the presence of such a relationship.
Formally, dependence refers to any situation in which random variables do not satisfy a mathematical condition of probabilistic independence.
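A correlation sketch for the utility example above, with invented daily figures (scipy assumed for the Pearson coefficient):

```python
from scipy import stats

temperature_c = [18, 22, 25, 28, 30, 33, 35]
demand_mwh    = [410, 430, 455, 490, 520, 560, 590]

r, p_value = stats.pearsonr(temperature_c, demand_mwh)
print(f"r = {r:.3f}, p = {p_value:.4f}")
# A strong positive r supports using weather forecasts to plan generation,
# but the correlation by itself does not establish the causal mechanism.
```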
* Cost-benefit analysis – weighing the total expected costs against the total expected benefits of one or more actions in order to choose the best or most profitable option. Cost-benefit analysis is heavily used by governments to evaluate the desirability of a given intervention: an analysis of the cost-effectiveness of different alternatives to see whether the benefits outweigh the costs. The aim is to gauge the efficiency of the intervention relative to the status quo.
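A back-of-the-envelope sketch comparing two hypothetical interventions by net present value and benefit-cost ratio, with an assumed 5% discount rate and invented cash flows:

```python
def npv(cash_flows, rate=0.05):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

options = {
    # option: (yearly costs, yearly benefits) -- illustrative figures
    "new inspection station": ([50_000, 5_000, 5_000], [0, 40_000, 40_000]),
    "operator training":      ([20_000, 2_000, 2_000], [0, 18_000, 18_000]),
}
for name, (costs, benefits) in options.items():
    c, b = npv(costs), npv(benefits)
    print(f"{name}: NPV = {b - c:,.0f}, benefit/cost = {b / c:.2f}")
```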

* CTQ tree – (Critical to Quality tree) decomposes broad customer requirements into more easily quantified requirements. These can range from the qualitative (delight) to the quantitative (cost).

* Design of experiments (DOE) – is the design of any information-gathering exercises where variation is present, whether under the full control of the experimenter or not. However, in statistics, these terms are usually used for controlled experiments. Other types of study, and their design, are discussed in the articles on opinion polls and statistical surveys (which are types of observational study), natural experiments and quasi-experiments (for example, quasi-experimental design).
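A design-of-experiments sketch: a 2^3 full factorial design enumerating every combination of three two-level factors (the factor names and levels are invented):

```python
from itertools import product

factors = {
    "temperature": ["low", "high"],
    "pressure":    ["low", "high"],
    "catalyst":    ["A", "B"],
}
runs = list(product(*factors.values()))
for i, run in enumerate(runs, start=1):
    print(f"run {i}: " + ", ".join(f"{k}={v}" for k, v in zip(factors, run)))
# Fractional and other designs cut the run count when a full factorial is too costly.
```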

* Failure mode and effects analysis (FMEA) – is a procedure in product development and operations management for analysis of potential failure modes within a system for classification by the severity and likelihood of the failures. A successful FMEA activity helps a team to identify potential failure modes based on past experience with similar products or processes, enabling the team to design those failures out of the system, thereby reducing development time and costs. It is widely used in manufacturing industries in various phases of the product life cycle and is now increasingly finding use in the service industry. Failure modes are any errors or defects in a process, design, or item, especially those that affect the customer, and can be potential or actual. Effects analysis refers to studying the consequences of those failures.
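A common way to prioritize failure modes in an FMEA is the Risk Priority Number (RPN = severity × occurrence × detection, each typically scored 1-10); a sketch with invented failure modes and scores:

```python
failure_modes = [
    # (description, severity, occurrence, detection) -- illustrative scores
    ("seal leaks under heat",   8, 4, 3),
    ("label prints off-center", 3, 6, 2),
    ("connector pin bends",     7, 2, 7),
]
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for desc, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:3d}  {desc}")
# The highest-RPN failure modes are the first candidates for design changes.
```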

* General linear model – The general linear model (GLM) is a statistical linear model. It may be written as:
Y = XB + U
where Y is a matrix with series of multivariate measurements, X is a matrix that might be a design matrix, B is a matrix containing parameters that are usually to be estimated and U is a matrix containing errors or noise. The errors are usually assumed to follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U.
The general linear model incorporates a number of different statistical models: ANOVA, ANCOVA, MANOVA, MANCOVA, ordinary linear regression, t-test and F-test. If there is only one column in Y (i.e., one dependent variable) then the model can also be referred to as the multiple regression model (multiple linear regression).
Hypothesis tests with the general linear model can be made in two ways: multivariate or as several independent univariate tests. In multivariate tests the columns of Y are tested together, whereas in univariate tests the columns of Y are tested independently, i.e., as multiple univariate tests with the same design matrix.
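A minimal sketch of estimating B by ordinary least squares with numpy; the design matrix X (intercept plus one predictor) and the measurements Y are made up:

```python
import numpy as np

X = np.array([[1, 1.0], [1, 2.0], [1, 3.0], [1, 4.0], [1, 5.0]])  # design matrix
Y = np.array([[2.1], [3.9], [6.2], [8.1], [9.8]])                 # measurements
B, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
U = Y - X @ B                                                     # estimated errors
print("B =", B.ravel())            # [intercept, slope]
print("residuals =", U.ravel())
```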
* Histograms – An intervalized and blocked-out bell curve, for the most part. In statistics, a histogram is a graphical representation, showing a visual impression of the distribution of experimental data. It is an estimate of the probability distribution of a continuous variable and was first introduced by Karl Pearson [1]. A histogram consists of tabular frequencies, shown as adjacent rectangles, erected over discrete intervals (bins), with an area equal to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval.
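A histogram sketch with numpy, reporting the frequency and frequency density (frequency divided by bin width) for each interval; the 200 sample measurements are randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=0.5, size=200)   # illustrative measurements
counts, edges = np.histogram(data, bins=8)
for count, left, right in zip(counts, edges[:-1], edges[1:]):
    density = count / (right - left)
    print(f"[{left:5.2f}, {right:5.2f}): n = {count:3d}, density = {density:6.1f}")
# matplotlib's plt.hist(data, bins=8) draws the same bins as adjacent rectangles.
```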

* Quality Function Deployment (QFD) – a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.” The technique is also used to identify and document competitive marketing strategies and tactics (see example QFD House of Quality for Enterprise Product Development.)
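A sketch of the core House of Quality arithmetic: weight each technical characteristic by how strongly it relates to each customer demand, scaled by the demand's importance. The demands, characteristics, weights, and 9/3/1 relationship scores are all invented:

```python
import numpy as np

demands = ["easy to carry", "long battery life", "rugged"]
importance = np.array([3, 5, 4])                  # customer weighting per demand
characteristics = ["mass", "battery capacity", "case thickness"]
relationship = np.array([                         # rows: demands, cols: characteristics
    [9, 1, 3],
    [3, 9, 0],
    [1, 0, 9],
])
scores = importance @ relationship
for name, score in sorted(zip(characteristics, scores), key=lambda t: -t[1]):
    print(f"{name}: {score}")
# The highest-scoring characteristics deserve the most engineering attention.
```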

* Pareto chart – a type of chart that contains both bars and a line graph, where individual values are represented in descending order by bars, and the cumulative total is represented by the line. Shows relative value of intervals/categories.
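A Pareto sketch with invented defect counts: sort the categories and track the cumulative share, the same information the chart shows with bars plus a line:

```python
defects = {"scratches": 48, "misalignment": 21, "missing screws": 9,
           "discoloration": 7, "other": 5}
total = sum(defects.values())
cumulative = 0
for category, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{category:15s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
# Typically a few categories account for most of the defects ("the vital few").
```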
* PICK chart -- When faced with multiple improvement ideas, a PICK chart may be used to determine the most useful. There are four categories on a 2x2 matrix: the horizontal axis is the scale of payoff (or benefits), the vertical axis is ease of implementation. By deciding where an idea falls on the PICK chart, one of four proposed project actions is assigned: Possible, Implement, Challenge, and Kill (thus the name PICK); a short quadrant-assignment sketch follows the list below.
Low Payoff, easy to do - Possible
High Payoff, easy to do - Implement
High Payoff, hard to do - Challenge
Low Payoff, hard to do - Kill
The vertical axis, representing ease of implementation, typically includes some assessment of the cost to implement as well: more expensive actions can be considered harder to implement.
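The quadrant assignment itself is trivial to mechanize; a sketch with invented ideas and 1-10 scores for payoff and ease:

```python
def pick_quadrant(payoff, ease, threshold=5):
    """Map payoff and ease scores (1-10) to the four PICK actions."""
    if payoff > threshold:
        return "Implement" if ease > threshold else "Challenge"
    return "Possible" if ease > threshold else "Kill"

ideas = {
    "relabel storage bins":      (3, 9),   # low payoff, easy
    "add second packing line":   (8, 8),   # high payoff, easy
    "automate final inspection": (9, 2),   # high payoff, hard
    "rewrite legacy MRP system": (4, 1),   # low payoff, hard
}
for idea, (payoff, ease) in ideas.items():
    print(f"{idea}: {pick_quadrant(payoff, ease)}")
```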
* Process capability – The output of a process is expected to meet customer requirements, specifications, or product tolerances. Engineering can conduct a process capability study to determine the extent to which the process can meet these expectations.
The ability of a process to meet specifications can be expressed as a single number using a process capability index or it can be assessed using control charts. Either case requires running the process to obtain enough measurable output so that engineering is confident that the process is stable and so that the process mean and variability can be reliably estimated. Statistical process control defines techniques to properly differentiate between stable processes, processes that are drifting (experiencing a long-term change in the mean of the output), and processes that are growing more variable. Process capability indices are only meaningful for processes that are stable (in a state of statistical control).
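A sketch of the two most common capability indices, Cp and Cpk, from assumed specification limits and an estimated mean and standard deviation (all figures invented; the process is assumed stable, since the indices are meaningless otherwise):

```python
lsl, usl = 9.5, 10.5          # lower and upper specification limits
mean, sigma = 10.1, 0.12      # estimated from in-control process data

cp = (usl - lsl) / (6 * sigma)                      # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * sigma)     # capability allowing for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# Cp ignores centering; Cpk drops as the mean drifts toward either limit.
```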
* Quantitative marketing research through Enterprise Feedback Management (EFM) – QMR is the application of quantitative research techniques to the field of marketing. It comes from both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.
EFM is a system of processes that enables organizations to centrally manage deployment of surveys while dispersing authoring and analysis throughout an organization. EFM systems typically provide different roles and permission levels for different types of users, such as novice survey authors, professional survey authors, survey reporters and translators.
EFM can help an organization establish a dialogue with employees, partners, and customers regarding key issues and concerns and potentially make customer specific real time interventions. EFM consists of data collection, analysis and reporting.
Prior to EFM, survey software was typically deployed in departments and lacked user roles, permissions and workflow. EFM enables deployment across the enterprise, providing decision makers with important data for increasing customer satisfaction, loyalty and lifetime value.[1] EFM enables companies to look at customers "holistically" and to better respond to customer needs.
* Regression analysis – includes any techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps us understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables — that is, the average value of the dependent variable when the independent variables are held fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution.
Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer causal relationships between the independent and dependent variables.
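A simple linear regression sketch (scipy assumed) relating an invented process setting to an invented defect rate, then predicting at a new setting:

```python
from scipy import stats

temperature = [170, 175, 180, 185, 190, 195, 200]   # oven setting (illustrative)
defect_rate = [4.1, 3.6, 3.2, 2.9, 2.5, 2.2, 1.9]   # percent defective (illustrative)

fit = stats.linregress(temperature, defect_rate)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.2f}, r^2 = {fit.rvalue ** 2:.3f}")
predicted = fit.intercept + fit.slope * 188
print(f"predicted defect rate at 188: {predicted:.2f}")
```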

* Root cause analysis (RCA) – a class of problem-solving methods aimed at identifying the root causes of problems or incidents. The practice of RCA is predicated on the belief that problems are best solved by attempting to correct or eliminate root causes, as opposed to merely addressing the immediately obvious symptoms. By directing corrective measures at root causes, it is hoped that the likelihood of problem recurrence will be minimized. However, it is recognized that complete prevention of recurrence by a single intervention is not always possible. RCA is initially a reactive method of problem detection and solving, meaning the analysis is done after an incident has occurred. With expertise, RCA becomes a proactive method, able to forecast the possibility of an incident even before it occurs. Although one typically follows the other, RCA is a completely separate process from Incident Management.

* Run charts – also known as run-sequence plots, these are graphs that display observed data in a time sequence. Often, the data displayed represent some aspect of the output or performance of a manufacturing or other business process. Run sequence plots[1] are an easy way to graphically summarize a univariate data set. A common assumption of univariate data sets is that they behave like:[2]
• random drawings;
• from a fixed distribution;
• with a common location; and
• with a common scale.
With run sequence plots, shifts in location and scale are typically quite evident, and outliers can easily be detected, as in the small sketch below.
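A plotting library would normally draw the run chart; the crude text version below, with invented daily output figures, is enough to show a shift in location:

```python
from statistics import median

output = [52, 51, 53, 50, 52, 51, 60, 61, 59, 62, 60, 61]   # daily output in time order
center = median(output)
for day, value in enumerate(output, start=1):
    bar = "*" * (value - 45)
    side = "above" if value > center else "below"
    print(f"day {day:2d} | {bar:<18s} {value} ({side} median {center})")
# A long run on one side of the median (day 7 onward) signals a shift, not random variation.
```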

* SIPOC analysis (Suppliers, Inputs, Process, Outputs, Customers) – An overview of a process, for example:
1. Suppliers - grocers and vendors
2. Inputs - ingredients for recipes
3. Process - cooking at a restaurant kitchen
4. Outputs - meals served
5. Customers - diners at a restaurant
* Taguchi methods -- statistical methods developed by Genichi Taguchi to improve the quality of manufactured goods, and more recently also applied to engineering,[1] biotechnology,[2][3] marketing, and advertising.[4] Professional statisticians have welcomed the goals and improvements brought about by Taguchi methods, particularly Taguchi's development of designs for studying variation, but have criticized the inefficiency of some of Taguchi's proposals.[5]
Taguchi's work includes three principal contributions to statistics:
• A specific loss function — see Taguchi loss function;
• The philosophy of off-line quality control; and
• Innovations in the design of experiments.
* Taguchi Loss Function -- is a graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. Praised by Dr. W. Edwards Deming (the business guru of the 1980s American quality movement),[1] it made clear the concept that quality does not suddenly plummet when, for instance, a machinist exceeds a rigid blueprint tolerance. Instead "loss" in value progressively increases as variation increases from the intended condition. This was considered a breakthrough in describing quality, and helped fuel the continuous improvement movement that since has become known as lean manufacturing.
The Taguchi Loss Function is important for a number of reasons. It helps engineers better understand the importance of designing for variation. It drives an improved understanding of the importance of Variation Management (a concept described in Breaking the Cost Barrier). Finally, it is important for describing the effects of changing variation on a system, which is a central characteristic of Lean Dynamics, a business management discipline focused on better understanding the impact of dynamic business conditions (such as the sudden changes in demand seen during the 2008-2009 economic downturn) on loss, and thus on creating value.
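The usual nominal-is-best form is L(y) = k(y - m)^2, where m is the target and k scales the loss. A sketch with invented numbers, choosing k so that a part at the tolerance limit incurs the full rework cost; it shows loss growing well before the blueprint limit is reached:

```python
target = 10.0        # nominal dimension (mm)
tolerance = 0.5      # blueprint tolerance (+/- mm)
rework_cost = 40.0   # cost when the part sits at the tolerance limit
k = rework_cost / tolerance ** 2

for y in (10.0, 10.1, 10.25, 10.4, 10.5):
    loss = k * (y - target) ** 2
    print(f"measured {y:5.2f} mm -> loss ~ ${loss:5.2f}")
```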

* TRIZ -- is "a problem-solving, analysis and forecasting tool derived from the study of patterns of invention in the global patent literature." It was developed by Soviet engineer and researcher Genrich Altshuller and colleagues, beginning in 1946. In English the name is typically rendered as "the theory of inventive problem solving" and occasionally goes by the English acronym "TIPS". The approach identifies generalisable problems and borrows solutions from other fields. TRIZ practitioners aim to create an algorithmic approach to the invention of new systems and the refinement of old ones.
TRIZ is variously described as a methodology, tool set, knowledge base, and model-based technology for generating new ideas and solutions for problem solving. It is intended for application in problem formulation, system analysis, failure analysis, and patterns of system evolution.
The TRIZ process presents an algorithm for the analysis of problems in a technological system. The fundamental view is that almost all "inventions" are reiterations of previous discoveries already made in the same, or other, fields, and that problems can be reduced to contradictions between two elements. The goal of TRIZ analysis is to achieve a better solution than a mere trade-off between the two elements, and the belief is that the solution almost certainly already exists somewhere in the patent literature.
