- Alpha Risk - The maximum risk or probability of making a Type I Error, i.e., the risk of rejecting the null hypothesis when the researcher should have failed to reject it. This probability is always greater than zero and is usually set at 5%. The researcher sets alpha at the greatest level of risk that is acceptable for a rejection of Ho.
- Alternative Hypothesis - A statement of change or difference. This statement is considered true if Ho is rejected.
- ANOVA - Analysis of Variance. A hypothesis test used to compare the means of two or more groups.
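  A minimal one-way ANOVA sketch, assuming SciPy is available; the group names and data are hypothetical:

  ```python
  # Compare the means of three (hypothetical) production lines.
  from scipy import stats

  line_a = [82.1, 83.4, 81.9, 84.0, 82.7]
  line_b = [85.2, 84.8, 86.1, 85.5, 84.9]
  line_c = [83.0, 82.5, 83.8, 84.1, 83.3]

  # H0: all group means are equal; Ha: at least one mean differs.
  f_stat, p_value = stats.f_oneway(line_a, line_b, line_c)
  print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # reject H0 if p < alpha (e.g., 0.05)
  ```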
- Attribute - A characteristic that may take on a discrete value that can be ordinal, nominal, or binary, e.g., 0 or 1, yes/no, on/off.
- Audit - To examine and check for compliance and adherence to standards or operating procedures.
- Average - For a data set, the average is the sum of all values divided by the total number or count of values (see also, Mean)
- Background Variables - Variables which are of no experimental interest and are not held constant. Their effects are often assumed insignificant or negligible and if not, they are randomized to ensure that confounding of the primary response does not occur.
- Balanced Design - An experimental design where all treatment combinations have the same number of observations.
- Bartlett's Test - Test used to compare the variances among two or more normally distributed populations
- Baseline - The current or most recent relevant output response of a process or measurement. It is the measurement used to indicate the starting point, i.e., the 'before' measurement against which changes or improvements are compared.
- Beta Risk - The risk or probability of making a Type II Error. Beta risk is the risk of failing to reject the null when in fact you should have rejected it. In production terms beta risk is the risk of letting a bad product get through to your customer.
- Bias - A measure of accuracy; the difference between the average measurement and the true value.
- Black Belt - A Black Belt is an individual who has demonstrated mastery of Six Sigma and/or Lean Six Sigma principles and curriculum. Black Belts are typically leaders of projects, teams or programs that employ Six Sigma, Lean and other quality management methods.
- Blocking Variables - A relatively homogenous set of conditions within which different conditions of the primary variables are compared. Used to ensure that background variables do not contaminate the evaluation of primary variables.
- Bottleneck - A location in the chain of a complete process where the flow of production slows down and impedes the continuation of the process.
- Boxplot - Graphical representation of a distribution of data. Boxplots display a vertical box which represents the middle 50% of the distribution's data (the bottom line of the box is the 25th percentile and the top line of the box is the 75th percentile). A horizontal line through the box typically indicates the median.
- Brainstorming - The act or practice of generating a wide variety of ideas from all participating parties without criticism or judgment.
- Brown-Forsythe or Levene's Test - A test used to compare the variances between two or more populations having any distribution.
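  A minimal sketch of the two equal-variance tests above, assuming SciPy is available; the sample data are hypothetical:

  ```python
  from scipy import stats

  sample_1 = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4]
  sample_2 = [3.8, 4.6, 4.9, 3.5, 4.2, 4.7]

  # Bartlett's test assumes each population is normally distributed.
  stat_b, p_b = stats.bartlett(sample_1, sample_2)

  # Levene's test with the median as center (the Brown-Forsythe variant)
  # is more robust to departures from normality.
  stat_l, p_l = stats.levene(sample_1, sample_2, center='median')

  print(f"Bartlett p = {p_b:.3f}, Levene p = {p_l:.3f}")  # small p suggests unequal variances
  ```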
- Calibrate - To adjust the scale of a gauge or other measuring instrument by comparison with a standard instrument.
- Common Cause Variables - Variables which are of no experimental interest and are not held constant. Their effects are often assumed insignificant or negligible and if not, they are randomized to ensure that confounding of the primary response does not occur (see also Background Variables).
- Cause and Effect Diagram - Also referred to as a Fishbone diagram or Ishikawa Diagram. It's a technique useful in problem solving. Possible causes from such sources as materials, machines, methods, and personnel are identified as starting points.
- Center Line - The line on a statistical process control chart that represents the process mean or grand mean.
- Center Points - Points at the center value of all factor ranges.
- Central Tendency - Typically used in reference to measures of central tendency which are measures that describe the central region of a distribution e.g. mean, median, mode.
- Champion - A member of senior management who is responsible for the business aspects of the program.
- Characteristic - A definable or measurable feature of a process, product, or variable.
- Chi-Squared Test - A test of whether two categorical factors are independent of each other, i.e., whether there is a statistically significant relationship between the two factors.
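  A minimal chi-square test of independence, assuming SciPy is available; the contingency table is hypothetical:

  ```python
  import numpy as np
  from scipy import stats

  # Rows: shift A / shift B; columns: pass / fail counts.
  observed = np.array([[90, 10],
                       [75, 25]])

  chi2, p_value, dof, expected = stats.chi2_contingency(observed)
  print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # small p suggests the factors are not independent
  ```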
- Common Cause - A source of variation which is typical of the process. An inherent natural source of variation.
- Confidence Interval - Statisticians use a confidence interval to express the degree of uncertainty associated with a sample estimate of a population parameter.
- Confidence Level - A confidence level refers to the likelihood that the true population parameter lies within the range specified by the confidence interval. The confidence level is usually expressed as a percentage. Thus, a 95% confidence level implies that the probability that the true population parameter lies within the confidence interval is 0.95.
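  A minimal sketch of a t-based 95% confidence interval for a mean, assuming NumPy and SciPy are available; the data are hypothetical:

  ```python
  import numpy as np
  from scipy import stats

  data = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])
  n = len(data)
  mean = data.mean()
  sem = data.std(ddof=1) / np.sqrt(n)       # standard error of the mean

  t_crit = stats.t.ppf(0.975, df=n - 1)     # critical value for a two-sided 95% confidence level
  lower, upper = mean - t_crit * sem, mean + t_crit * sem
  print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
  ```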
- Confounding - Confounding occurs when the effect of one variable on the response cannot be separated from the effect of another variable, so the individual relationships between independent and dependent variables cannot be observed or measured.
- Continuous Random Variable - Continuous random variables can take on any value within a range of values.
- Continuous Variable - If a variable can take on any value between two specified values, it is called a continuous variable; otherwise, it is called a discrete variable.
- Control Chart - Control charts are graphical tools to present and analyze the process performance in statistical process control. Control charts are used to detect special cause variation and determine whether the process is in statistical control (stable).
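  A minimal sketch of control limits for an individuals (I) chart, estimated from the average moving range (2.66 is the standard I-MR chart factor); the data are hypothetical:

  ```python
  import numpy as np

  x = np.array([50.1, 49.8, 50.4, 50.0, 49.7, 50.3, 50.2, 49.9, 50.5, 50.1])
  moving_range = np.abs(np.diff(x))         # absolute difference between consecutive points

  center_line = x.mean()
  mr_bar = moving_range.mean()
  ucl = center_line + 2.66 * mr_bar         # approximate 3-sigma upper control limit
  lcl = center_line - 2.66 * mr_bar         # approximate 3-sigma lower control limit

  signals = x[(x > ucl) | (x < lcl)]        # points suggesting special cause variation
  print(f"CL = {center_line:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}, signals: {signals}")
  ```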
- Controllable X - An input or independent variable that can be manipulated or changed in order to measure its effect on the response variable.
- Corrective Action - An action taken to set things right, make better, to correct an error.
- Correlation - Correlation is a statistical technique that describes whether and how strongly two or more variables are related.
- Cp - Cp is a process capability index. It measures the process's potential capability to meet two-sided specifications. Cp doesn't take the process average into consideration and it measures within-subgroup variation.
- Cpk - Cpk is a process capability index. It measures the process's actual capability by taking both the variation and average of the process into consideration. Cpk considers within-subgroup variation in its calculation.
- Cpl - Capability index calculated against the lower specification limit only: Cpl = (mean - LSL) / (3 sigma).
- Cpu - Capability index calculated against the upper specification limit only: Cpu = (USL - mean) / (3 sigma).
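  A minimal sketch of the capability indices above, assuming NumPy is available; the data and specification limits are hypothetical, and for simplicity a single overall standard deviation stands in for the within-subgroup estimate that Cp/Cpk formally use:

  ```python
  import numpy as np

  data = np.array([10.1, 9.9, 10.2, 10.0, 9.8, 10.3, 10.1, 10.0, 9.9, 10.2])
  usl, lsl = 10.6, 9.4                      # upper / lower specification limits
  mu, sigma = data.mean(), data.std(ddof=1)

  cp  = (usl - lsl) / (6 * sigma)           # potential capability (ignores centering)
  cpu = (usl - mu) / (3 * sigma)            # capability against the upper spec only
  cpl = (mu - lsl) / (3 * sigma)            # capability against the lower spec only
  cpk = min(cpu, cpl)                       # actual capability (accounts for centering)
  print(f"Cp = {cp:.2f}, Cpu = {cpu:.2f}, Cpl = {cpl:.2f}, Cpk = {cpk:.2f}")
  ```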
- Crossed Factors - Two factors are crossed when every level of one factor occurs in combination with every level of the other factor (contrast with Nested Factors).
- Cycle Time - The time required to complete one cycle of an operation. If cycle time for every operation in a complete process can be reduced to equal takt time, products can be made in single-piece flow.
- Database - A large collection of records stored on a computer system from which specialized data may be extracted or organized as desired.
- Degrees of Freedom - The number of degrees of freedom generally refers to the number of independent observations in a sample minus the number of population parameters that must be estimated from sample data.
- Dependent Variable - The dependent variable is the response or output variable. Dependent variables are the Y in the Y=f(x) transfer function.
- Detailed Process Map - A graphical representation of the flow of a process (starting and ending points, action steps, decision points etc.). A detailed process map contains multiple levels of information beyond a high-level map, which is beneficial for improving the process.
- Detectability - In reference to the FMEA, detectability is the likelihood that your controls will detect when the x fails or when the failure mode occurs.
- Discrete Random Variable - If a variable can take on any value between two specified values, it is called a continuous random variable; otherwise it is called a discrete random variable.
- Discrete Variable - If a variable can take on any value between two specified values, it is called a continuous variable; otherwise, it is called a discrete variable.
- Dotplot - A statistical analysis method consisting of dots on a chart that has a vertical (y) and horizontal (x) axis. Dots are placed on the chart at the intersection of the x and y values. Dotplots can also be arranged with only one axis to form a type of Histogram.
- DPU - Defects per unit. The count of defects divided by the number of units or products produced.
- Effect - The main effect of a factor in a two-level factorial experiment is the mean difference in responses between the two levels of the factor, which is averaged over all levels of the other factors.
- Error - Unexplained variation in a collection of observations.
- Experiment - A test under controlled conditions to determine the effects of independent variables on dependent variables. In an experiment, a researcher manipulates one or more variables, while holding all other variables constant. By observing how the manipulated variables affect a response variable, the researcher can test whether a causal relationship exists between the variables.
- Experimental Error - Variation in observations made under identical test conditions. Also called residual error. The amount of variation which cannot be attributed to the variables included in the experiment. Common cause variation during the experiment.
- F-Test - Test used to compare the variances between two normally distributed populations.
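  A minimal two-sample F-test sketch; SciPy (assumed available) has no single-call F-test for two variances, so the ratio and p-value are computed directly from hypothetical data:

  ```python
  import numpy as np
  from scipy import stats

  a = np.array([4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 4.1])
  b = np.array([3.8, 4.6, 4.9, 3.5, 4.2, 4.7, 4.0])

  f_stat = a.var(ddof=1) / b.var(ddof=1)    # ratio of the sample variances
  df1, df2 = len(a) - 1, len(b) - 1
  # Two-sided p-value from the F distribution.
  p_value = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))
  print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
  ```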
- Factors - Independent variables, usually the x variables in an experiment. Factors may have several levels or settings. In DOE it is common for factors to have 2 levels.
- Failure Cause - In FMEA, failure causes are the factors or variables that are thought to have influence such that they cause failures.
- Failure Effect - In FMEA, failure effects are the results or effects of failures identified in through the use of Failure Modes and Effects Analysis.
- Failure Mode - A description of a non-conformance at a particular process step.
- Fishbone - Also referred to as a cause and effect diagram or Ishikawa Diagram. It's a technique useful in problem solving. Possible causes from such sources as materials, machines, methods, and personnel are identified as starting points.
- Fixed Effect - An effect associated with an input variable that has a limited number of levels or in which only a limited number of levels are of interest to the experimenter.
- FMEA - Failure Modes and Effects Analysis. A detailed procedure that documents process steps and all the possible ways in which that process can potentially fail, the causes of those failures, the effects of failures and the ability to detect when or if the failures occur.
- FTY - First Time Yield. FTY is simply the number of good units produced divided by the number of total units going into the process.
- Gage R&R - Gage Repeatability and Reproducibility is a method of measurement systems analysis that determines the proportion of variation in a measurement system that is attributable to repeatability, reproducibility, part to part variation and/or the variation caused by the measurement device itself.
- Green Belts - A Green Belt is an individual who has demonstrated understanding of Six Sigma and/or Lean Six Sigma principles and curriculum at the Green Belt level. Green Belts are typically leaders of individual projects or members of Black Belt project teams or programs. Green Belts use Six Sigma and the DMAIC methodology in their daily work.
- Hidden Factory - Hidden activities or sub-processes that inefficiently exist within a process. Hidden factories are undesirable and need to be found and eliminated. They are usually workarounds that people have implemented over time.
- Histogram - a graphical representation of the frequency distribution of data. Arranges data in bar charts according to its location and frequency. Used to quickly see the shape and variation (spread) of the data.
- Historical Data - Data that has been gathered over a period of time and used or displayed in the present to learn and evaluate.
- Homogeneity of Variance - A test used to determine if the variances of two or more samples are different.
- Homoscedasticity - In statistics, a sequence or a vector of random variables is homoscedastic if all random variables in the sequence or vector have the same finite variance. This is also known as homogeneity of variance. The complementary notion is called heteroscedasticity.
- House of Quality (HOQ) - A product planning matrix that is developed during a Design for Six Sigma (DFSS) event and shows the relationship of customer requirements to the means of achieving these requirements.
- Independent Variable - The independent variable is the input variable. Independent variables are the x's in the Y=f(x) transfer function.
- Input - Variables that go into a process to produce a product or output.
- Instability - Process data with either unnaturally large fluctuations, trends or shifts suggesting special cause conditions exist.
- Interaction - Occurs when the effect of one factor on a response depends on the level of another factor(s).
- Interquartile Range - The range between the first and third quartiles of a ranked data set.
- Just in Time - A system for producing and delivering the right items at the right time in the right amounts. The key elements of just in time are flow, pull, standard work and takt time.
- Kanban - A pull production scheduling system used to determine when, what and how much to produce based on demand, reducing waste and increasing the speed of response to demand.
- Kruskal-Wallis Test - A nonparametric one-way analysis of variance (on ranks) used to compare the medians among two or more groups.
- Lack of Fit Error - Error that occurs when the analysis omits one or more important terms or factors from the process model.
- LCL - Lower Control Limit is the lower bound of a statistical process control chart that is set by the specific calculation of that particular control chart. As a general rule, LCLs are approximated as 3 sigma below the center line.
- Line Charts - Simple charts created with a line representing data used to track the performance over time. No relationships to process capability or control limits or input variables can be derived from simple line charts (see also Run Chart).
- LSL - Lower Specification Limits are not to be confused with Lower Control Limits. LSLs are the lower bounds of a customer’s specification and are set by the quantifiable demands of the customer.
- Mann-Whitney Test - A hypothesis test that compares the medians of two populations which are not normally distributed.
- Master Black Belt - A Master Black Belt is an individual who has demonstrated the highest degree of mastery of Six Sigma and/or Lean Six Sigma principles and curriculum. Master Black Belts are typically program leaders, deployment leaders, executives or other high ranking officials of Lean Six Sigma programs or quality management programs.
- Mean - The arithmetic mean is the sum of all data points divided by the total number of data points (see also, Average).
- Measurement Error - Measurement error is the difference between an observed value of quantity and its true value. In statistics, an error is not a mistake. Variability is an inherent part of things being measured and of the measurement process.
- Measures of Scale - Quantifiable techniques used to understand variability. Common measures of scale are Range, Variance and Standard Deviation
- Median - The median is the middle value of the data set in numeric order. It separates a finite set of data into two equal parts, one with values higher than the median and the other with values lower than the median.
- Metric - An analytical measurement intended to quantify the state of a system. For example 'population density' is one metric which may be used to describe a city.
- Mode - The mode is the most common or frequently observed value in a sample or population.
- Mood's Median Test - A hypothesis test that compares the medians of two or more populations. It is an alternative to the Kruskal-Wallis test.
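  A minimal sketch of the nonparametric comparisons above (Kruskal-Wallis, Mann-Whitney, Mood's median), assuming SciPy is available; all sample data are hypothetical:

  ```python
  from scipy import stats

  g1 = [12, 15, 11, 14, 13, 16]
  g2 = [18, 17, 19, 15, 20, 16]
  g3 = [14, 13, 15, 12, 16, 14]

  h_stat, p_kw = stats.kruskal(g1, g2, g3)         # medians of two or more groups
  u_stat, p_mw = stats.mannwhitneyu(g1, g2)        # medians of two groups
  m_stat, p_mood, grand_med, table = stats.median_test(g1, g2, g3)  # Mood's median test
  print(f"Kruskal-Wallis p = {p_kw:.3f}, Mann-Whitney p = {p_mw:.3f}, Mood's p = {p_mood:.3f}")
  ```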
- MSA - Measurement Systems Analysis is the practice of validating the accuracy, stability or bias of a measurement system and/or device.
- Muda - A Japanese word that means 'waste'. It refers to any activity that consumes resources but creates no value.
- Multicollinearity - When two or more independent variables in a multiple regression model are correlated with each other.
- Multiple Linear Regression - A linear regression model with two or more predictor (x) variables.
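  A minimal multiple linear regression sketch using ordinary least squares via NumPy; the predictor names and data are hypothetical:

  ```python
  import numpy as np

  # Two predictors (x1 = temperature, x2 = pressure) and one response (y = yield).
  x1 = np.array([100, 110, 120, 130, 140, 150], dtype=float)
  x2 = np.array([1.0, 1.2, 1.1, 1.4, 1.3, 1.5])
  y  = np.array([62.1, 64.8, 66.0, 69.3, 68.7, 71.5])

  X = np.column_stack([np.ones_like(x1), x1, x2])  # design matrix with an intercept column
  coef, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
  b0, b1, b2 = coef
  print(f"y = {b0:.2f} + {b1:.3f}*x1 + {b2:.2f}*x2")
  ```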
- Nested Factors - A factor 'A' is nested within another factor 'B' if the levels or values of 'A' are different for every level or value of 'B'. Nested factors or effects have a hierarchical relationship.
- Nominal Data - Data arranged in unordered categories which indicate membership or non-membership with no implication of quantity, i.e. part numbers, colors, states etc.
- Non-Value-Added - Non-Value-Added (NVA) describes any activity or process step that the customer does not value.
- Nonconforming Unit - A unit which does not conform to one or more specifications, standards, and/or requirements.
- Nonconformity - A condition within a unit which does not conform to some specific specification, standard, and/or requirement; often referred to as a defect; any given nonconforming unit can have the potential for more than one nonconformity. Nonconformities do not necessarily render a unit fully defective.
- Normal Distribution - A continuous, symmetrical density function characterized by a bell-shaped curve, e.g., distribution of sampling averages. With normal distributions mean=median=mode.
- NP Chart - A type of control chart that plots the number of defectives per subgroup. Subgroup sizes must be constant.
- Null Hypothesis - A statement of no change or no difference. The null hypothesis statement is assumed true until sufficient evidence is presented to reject it.
- Occurrence - In an FMEA, occurrence is a relative and subjective measure of the frequency of a failure event.
- One Piece Flow - The opposite of batch production. Instead of building many products and then holding them in queue for the next step in the process, products go through each step in the process one at a time, without interruption. It improves quality and lowers costs.
- One Sample Proportion - A hypothesis test to compare the proportion of one certain outcome occurring in a population following the binomial distribution with a specified proportion.
- One Sample Sign - A hypothesis test that compares the median of a population with a specified value.
- One Sample Wilcoxon Test - A hypothesis test to compare the median of one population with a specified value. It is an alternative test of one sample t-test when the distribution of the data is non-normal. It is more powerful than one sample sign test but it assumes the distribution of the data is symmetric.
- Operational Definition - A definition that has meaning only as it applies to a particular operation or application.
- Ordinal Data - Data arranged in ordered categories (ranking) with no information about distance between each category, i.e., rank ordering of several measurements of an output parameter.
- Orthogonality - Two vectors of the same length are orthogonal if the sum of the products of their corresponding elements is 0.
- Outlier - A data point that does not fit a population or sample. Outliers may be erroneous readings or genuinely extreme values; either way, they are not representative of the sample or population they are being compared with.
- Output - An output is a response or result of a process.
- P Chart - A control chart used to plot percent defectives of samples over time. Samples do not need to have equal sub-group sizes.
- p-value - The probability of being wrong if the null hypothesis is rejected when in fact it is true; that is, the probability of obtaining a value of the test statistic at least as extreme as the one observed, by chance alone, if the null hypothesis were true.
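  A minimal sketch showing how a p-value is obtained from a one-sample t-test, assuming SciPy is available; the target value and data are hypothetical:

  ```python
  from scipy import stats

  data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]

  # H0: the population mean equals 10.0.
  t_stat, p_value = stats.ttest_1samp(data, popmean=10.0)
  print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # reject H0 if p < alpha (commonly 0.05)
  ```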
- Parameter - A parameter is used to identify a characteristic, a feature, or a measurable factor that can help in defining a particular system. It is an important element to consider when evaluating or understanding an event, a project or any situation.
- Pareto Chart - Bar chart in which the relative importance or frequency of problems or possible causes of a problem are displayed in descending order. Pareto charts display total defects, cumulative defects, defects by category and cumulative defect percentages.
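  A minimal sketch of the Pareto calculation (counts sorted in descending order with cumulative percentages), assuming NumPy is available; the defect categories and counts are hypothetical, and plotting is left to any charting library:

  ```python
  import numpy as np

  categories = np.array(["Scratch", "Dent", "Misalignment", "Color", "Other"])
  counts = np.array([120, 80, 35, 15, 10])

  order = np.argsort(counts)[::-1]                 # descending by frequency
  sorted_counts = counts[order]
  cum_pct = 100 * np.cumsum(sorted_counts) / sorted_counts.sum()

  for cat, cnt, pct in zip(categories[order], sorted_counts, cum_pct):
      print(f"{cat:14s} {cnt:4d}  cumulative {pct:5.1f}%")
  ```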
- Poka Yoke - A mistake-proofing device, method or procedure that prevents a defect from passing on to the next operation or process.
- Population - A set of entities concerning which statistical inferences are to be drawn, often based on a sample taken from that population. Populations are considered the entire set or universe of entities.
- Power - The ability of a statistical test to detect a real difference when there really is one, or the probability of being correct in rejecting the null hypothesis. Commonly used to determine if sample sizes are sufficient to detect a difference in treatments, if one exists.
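  A minimal sample-size sketch for the power concept above, assuming the statsmodels package is available; the effect size, alpha, and power targets are hypothetical:

  ```python
  from statsmodels.stats.power import TTestIndPower

  # Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
  # with alpha = 0.05 and 80% power in a two-sample t-test.
  analysis = TTestIndPower()
  n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
  print(f"Approximately {n_per_group:.0f} observations per group are needed")
  ```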
- Pp - Pp is a process capability index. It measures the process's potential capability to meet two-sided specifications. Pp doesn't take the process average into consideration and it measures overall or total variation.
- Ppk - Similar to Cpk, Ppk measures the process capability by taking both the variation and the average of the process into consideration. Ppk uses total standard deviation in its calculation.
- PPM - Parts per Million
- Prevention - The practice of eliminating unwanted failures or variation. Predicting a future condition and applying corrective action before the predicted event occurs.
- Primary Metric - The critical measure of the process; the 'Y' in the Y=f(x) equation. Primary metrics are the single most important measures of processes or projects.
- Probability Distribution - (or Distribution) describes the range of possible events and the possibility of each event occurring. In statistical terms, distribution is the probability of each possible value of a random variable when the variable is discrete, or the probability of a value falling in a specific interval when the variable is continuous.
- Process Map - A graphical representation of the flow of a process (starting and ending points, action steps, decision points etc.).
- Process Spread - The range of values which a given process characteristic displays. This particular term most often applies to the range but may also encompass the variance. The spread may be based on a set of data collected at a specific point in time or may reflect the variability across a given amount of time.
- PVR - Process Variability Reduction is an approach to improving the process by removing variability. Variability can be anything from doing the same task two different ways to a piece of equipment that doesn't always run right. The goal of PVR is to remove variability from the process and end up with a more dependable, predictable and consistent process.
- Quartile - The first quartile of a data series is the value below which one quarter of the numbers in the series fall; the third quartile is the value below which three quarters of the numbers fall; the second quartile is the same as the median.
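  A minimal sketch of quartiles and the interquartile range, assuming NumPy is available; the data are hypothetical:

  ```python
  import numpy as np

  data = np.array([7, 15, 36, 39, 40, 41, 42, 43, 47, 49])

  q1, q2, q3 = np.percentile(data, [25, 50, 75])
  iqr = q3 - q1                               # interquartile range (middle 50% of the data)
  print(f"Q1 = {q1}, median = {q2}, Q3 = {q3}, IQR = {iqr}")
  ```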
- Random - Selecting a sample so each item in the population has an equal chance of being selected, lack of predictability, without pattern.
- Random Cause - See Common Cause
- Random Effect - An effect associated with input variables chosen at random from a population having a large or infinite number of possible values.
- Random Effects Model - Experimental treatments are a random sample from a larger population of treatments. Conclusions can be extended to the population. Inferences are not restricted to the experimental levels.
- Random Error - Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed around zero.
- Random Sample - One or more samples randomly selected from a population.
- Random Variable - A variable which can assume any value from a set of possible values.
- Random Variations - Variations in data which result from causes which cannot be pinpointed or controlled.
- Randomization - A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs. Randomization is necessary for conclusions drawn from the experiment to be correct, unambiguous and defensible.
- Randomness - A condition in which any individual event in a set of events has the same mathematical probability of occurring as all other events within the specified set. i.e. Individual events are not predictable even though they may collectively belong to a definable distribution.
- Range - The difference between the highest and lowest values in a set of values.
- Rank - Ordinal values assigned to items in a sample to indicate their relative position within a set of values.
- Ratio Scale - Most measurements in the physical sciences and engineering are made on ratio scales. Mass, length, time, plane angle, energy and electric charge are examples of physical measures that are ratio scales. The scale type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind.
- Regression Equation - A prediction equation, not necessarily linear, that allows the values of inputs to be used to predict a corresponding output.
- Repeatability - Variation in measurements obtained with one gauging instrument when used several times by the same appraiser. This includes the ability of an automated tester (no real operator involvement) to repeat itself.
- Replication - Performing the same treatment combination more than once.
- Representative Sample - A sample, which accurately reflects a specific characteristic or set of characteristics within a population.
- Reproducibility - Variation in the average of the measurements made by different appraisers using the same measurement device. This can also include the agreement between several different measuring devices (not only appraisers).
- Resolution - The measure or degree of confounding. Higher resolution means less confounding, or confounding of main effects only with higher-order interactions.
- Response Surface Designs - A DOE that fully explores the process window and models the responses.
- Risk Priority Number - RPN equals severity value multiplied by the occurrence value multiplied by the detection value. RPN is used to prioritize recommended actions. Special consideration should be given to high severity ratings even if occurrence and detection are low.
- Root Cause Analysis - A systematic approach to finding the deep-rooted causes of a resulting output that is in need of improvement.
- RPN - Risk Priority Number equals severity value multiplied by the occurrence value multiplied by the detection value. RPN is used to prioritize recommended actions. Special consideration should be given to high severity ratings even if occurrence and detection are low.
- RTY - Rolled Throughput Yield is the probability that a unit will make it through all process steps defect free.
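  A minimal sketch tying together DPU, FTY, and RTY as defined above; the step names and counts are hypothetical, and RTY is computed as the product of the step yields:

  ```python
  # For each step: (units entering, good units out the first time, total defects observed).
  steps = {
      "Cutting":  (1000, 980, 25),
      "Welding":  (980, 930, 60),
      "Painting": (930, 912, 20),
  }

  rty = 1.0
  for name, (units_in, good_units, defects) in steps.items():
      dpu = defects / units_in          # defects per unit (a unit can carry several defects)
      fty = good_units / units_in       # first time yield for this step
      rty *= fty                        # rolled throughput yield: product of step yields
      print(f"{name}: DPU = {dpu:.3f}, FTY = {fty:.3f}")
  print(f"RTY = {rty:.3f}")
  ```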
- Run Chart - Run Charts are basic data charts created with a line representing data used to track the performance over time. No relationships to process capability or control limits or input variables can be derived from simple run charts (see also Line Chart).
- Sample - One or more observations drawn from a larger collection of observations or universe (population).
- Scatter Plot - A type of diagram used to present the relationship between two variables of a data set. On the scatter plot, a single observation is represented by a data point with its horizontal position equal to the value of one variable and its vertical position equal to the value of the other variable.
- Screening Designs - A DOE that identifies which of many factors have a significant effect on the response.
- Secondary Metric - A measure of process output of secondary importance. Secondary measures are metrics needing to be maintained during project work to prevent process sub-optimization or to prevent the adverse effect of changes made to improve the primary metric.
- Significant Difference - The term used to describe the results of a statistical hypothesis test where a difference is too large to be reasonably attributed to chance.
- SMED - Single Minute Exchange of Dies. A series of techniques pioneered by Shigeo Shingo for changeovers of production machinery in less than ten minutes.
- SOP - Standard Operating Procedures (SOP) clearly define work instructions that are consistently applied to work operations.
- Spaghetti Map - A map of the path taken by a specific product as it travels down the value stream in a mass-production organization, so called because the mapping method uses a continuous line to trace the product's path and distance traveled.
- Special Cause - A source of variation which is non-random and outside of what is normally expected. A special cause is often signaled by a data point outside the upper or lower control limits, and/or a non-random pattern of the data within the control limits.
- Stable Process - A process which is free of special causes, e.g., in statistical control.
- Stakeholder - An individual that has a vested interest in the performance or improvement of a process or project.
- Statistical Control - A quantitative condition which describes a process that is free of special causes of variation. Such a condition is evidenced with the use of a control chart.
- Takt Time - Available production time divided by customer demand. It is the tempo or pace required to produce exactly the demanded quantity within the available production time.
- Time Series Plot - A display of measurement data on the y-axis versus time data on the x-axis.
- Total Productive Maintenance - Total productive maintenance is the effective management of business assets in order to produce the highest yield and utilization on a continuous basis. Elements include: maximize equipment effectiveness; establish a thorough system of preventive maintenance (PM) for the equipment's life span; implementation by various departments; involvement of every single employee from top management to workers on the floor; promotion of PM through autonomous small group activities.
- TPVR - Transactional Process Variability Reduction (TPVR) is an approach to reduce variation in transactional processes in a way that dramatically reduces the variation in transactional outputs. Transactional processes include customer service, job instructions, pre-press, invoicing, shipping, etc. TPVR increases process stability, consistency, and quality.
- Type I Error - The error in rejecting the null hypothesis when in fact it is true. Also the error in saying there is a difference when in fact there is no difference or declaring a defect when one does not exist.
- Type II Error - The error in failing to reject the null hypothesis when in fact it is false. Also the error in saying there is not a difference when in fact there is one, or allowing defects to pass through to customers.
- UCL - Upper Control Limit is the upper bound of a statistical process control chart that is set by the specific calculation of that particular control chart. As a general rule, UCL's will be approximated as 3 sigma above the center line.
- USL - Upper Specification Limits are not to be confused with Upper Control Limits. USL's are the upper bounds of a customer’s specification. USL's are set by the quantifiable demands of a customer.
- VOC (Voice of the Customer) - Customer feedback both positive and negative including likes, dislikes, problems, and suggestions.
- XY Matrix - A simple spreadsheet used to relate and prioritize x's to y's through numerical ranking.
- 5 Whys - The practice of asking why 5 times whenever a problem is encountered, in order to identify the root cause of the problem so that effective countermeasures can be developed and implemented.
- 5s - Standards that make up the foundation that supports a clean and safe manufacturing environment. The 5S standards are commonly known as sort, set in order, shine, standardize and sustain.
- 7 Deadly Muda - The original classification developed by Taiichi Ohno of the most common wastes in manufacturing. These are overproduction ahead of demand, waiting for the next processing step, unnecessary transport of materials, over processing of parts due to poor tool and product design, inventories more than the absolute minimum, unnecessary movement by employees during the course of their work and production of defective parts.
- 80/20 rule - Also known as the Pareto principle: 80% of the defects are derived from 20% of the causes. Named after Italian economist Vilfredo Pareto, who found in his studies that 80% of the land was owned by only 20% of the population.