20

Design for Six Sigma (DFSS)

The Need for DFSS

The DMAIC roadmap was designed to detect and fix problems in an existing, stable manufacturing process, service process or product design function. It requires a measurement system, statistical analysis tools, the ability to define a mean adjustment factor to put the response back on target and a control plan to audit the response performance against statistical control limits. This approach is primarily based on the application of statistical process control, quality tools and process capability analysis; it is not a design methodology. The DMAIC roadmap is, therefore, not appropriate for a new product design and development process; nor does it adequately address improvement in terms of reliability.

There is a limit to which processes can be improved using the DMAIC roadmap. Companies that have adopted Six Sigma have realized that they often reach this limit, called the “five sigma wall” (Jiang, Shiu and Tu 2007). Further improvement can be achieved only by completely redesigning the products and processes. A few years ago, railway reservations in India could be made only by physically going to the railway station. Until the 1980s, I remember, making a reservation often meant spending about half a day at the station. The system has since been completely redesigned and has undergone many improvement cycles. Today, we can book a railway ticket, get a reservation and print the ticket in about 5 minutes. This is possible even in a developing country like India. Thus, redesigning systems can deliver stunning results that may not even have been thought possible under the philosophy of improving the current process. We may call this re-engineering.

Another limitation of the DMAIC roadmap is that it does not adequately address reliability improvement. Reliability depends significantly on the design of the system; improving reliability, therefore, requires design changes. It is interesting to note that most Six Sigma programs do not include any tools of reliability engineering.

I remember a few examples from my own experience. I worked as the head of product engineering at Cummins India Limited (CIL) from 2002 to 2005. We were trying to improve the durability of camshafts. Initially, we had thought of taking up a DMAIC project. After studying the various failure modes, we realized that durability improvement required redesigning the camshaft profiles and related parts. A DFSS project was therefore taken up and completed successfully, with excellent improvement in durability.

DFSS Roadmap Options

DFSS is used to conceive, design, optimize and certify the capability of a new product design or manufacturing process based on the market need or business strategy. It can also be used to improve a current manufacturing process or product design that has radically different performance requirements.

DFSS requires that

  1. various functions in an organization work simultaneously to design a product, service or process. This involvement of functions from the beginning of a project is essential;
  2. a product, service and/or process is designed to minimize variability in CTQs; and
  3. a process is designed to be capable of delivering quality and quantity in a timely manner.

When a company wants to develop new technology, the recommended roadmap is I2DOV as given below:

  1. Invent and Innovate
  2. Develop
  3. Optimize
  4. Verify and Validate

(Creveling, Slutsky and Antis Jr. 2003)

A popular roadmap of a product design and development is CDOV:

  1. Concept
  2. Design
  3. Optimize
  4. Verify and Validate

A similar roadmap, Identify-Design-Optimize-Validate (IDOV), is suggested by some experts (Woodford 2002). Another popular roadmap of the DFSS approach is DMADV (Gitlow, Levine and Popovich 2006):

  1. Define
  2. Measure
  3. Analyze
  4. Design
  5. Verify/Validate

I personally feel that the above roadmap is somewhat force-fitted to sound like DMAIC and, therefore, I prefer the I2DOV and CDOV roadmaps.

Technology DFSS (TDFSS) is used to develop and certify the capability of new manufacturing technologies that are required based on long-range product family plans. An important aspect of this phase is innovation and invention, with systematic efforts to optimize these and convert them into a viable technology useful for business. A typical roadmap used is I2DOV.

During the product design and development phase, the existing technology is usually used. The focus is thus on faster development of a reliable product at competitive costs for specific applications.

DFSS addresses

  • the gathering and understanding of customer needs
  • the engineering and statistical methods to be applied during the product development process.

Typical tools used in DFSS are listed in Table 20.1.

 

Table 20.1 DFSS toolset in a DMADV roadmap

Voice of the Customer

Understanding the voice of the customer (VOC) (Cohen 2004) is the starting point of every design. The voice of the customer can be gathered using the following techniques:

  1. Customer interviews
  2. Customer complaints data

Customer Interviews

For customer interviews, develop open-ended questions so that customers are able to express their requirements to the maximum practicable extent. The outcome of customer interviews is an unstructured set of phrases representing customer wants and needs. These will include a mixture of their true needs, product features that they like and don't like, their preferences, complaints, suggestions and target-indicating comments. Customer interviews can be carried out using three approaches:

  1. Focus groups
  2. One-on-one interviews
  3. Contextual inquiries

(a) Focus Group Interviews     have their advantages and disadvantages. The advantages are:

  • Synergy: Any comment from a group member usually triggers a discussion on the need in question and provides useful inputs.
  • Savings in terms of time and cost: Many customers are interviewed in the time it would otherwise take to interview only one customer.

Disadvantages of the focus group interviews could be any of the following:

  • The dominant members can influence other group members.
  • Some specific customer needs may not surface as the group members may not like to share certain information given the possibility that other customers could be their competitors.

(b) One-on-One Interviews     are the most common form of customer interviews. These usually take place in a conference room or an office. To maximize the benefit of a customer interview, it is necessary to prepare well, keeping the objective in mind. The interview may be recorded; however, some customers may not open up in recorded interviews. On the other hand, taking notes could result in the loss of some important observations or inputs.

(c) Contextual Inquiries     are conducted as a special case of one-on-one customer interview in the context of the customer's activities/usage. Such an inquiry can surface “hard to find” needs of the customer. This is because the product developer can observe the customer in the context of the latter's activities. A contextual inquiry is usually more effective than conventional interviews.

We should then sort the VOC into major categories including

  • needs/benefits
  • product characteristics
  • reliability requirements
  • others

Gather Complaints Data

Complaints data indicate dissatisfaction with current products or services. We can take a random sample of complaints from the database. This will include “ignored” complaints, which could be about the misuse of products, complaints rejected because the product is beyond warranty, etc. These are useful for new product introduction teams.

A VOC Table

It is necessary to organize the customer input in the form of a VOC table. We can separate out customer needs, functions, reliability, targets and product characteristics. Table 20.2 shows a simple example of a VOC table for a mobile phone.

 

Table 20.2 An example of a VOC table for a mobile phone

Grouping Customer Needs Using K. J. Diagrams

It is convenient to structure the customers’ needs in K. J. diagrams. KJ stands for Kawakita Jiro. The technique was invented by Jiro Kawakita, a noted Japanese anthropologist, as a means of summarizing and characterizing large quantities of anthropological data gathered during his expeditions. After recognizing the potential application of his technique in hypothesis formulation, Kawakita refined the process and developed it into what is today known as the K. J. method. It is a special form of “affinity diagram”. Figure 20.1 shows an overview of the K. J. process. Raw needs are listed on Post-it slips and the team members group these applying some logic. The grouping is then refined further. It is common to assign weights to the various needs, represented by, say, the number 5 for the highest weightage and 1 for the lowest.

 

 

Figure 20.1 Using the KJ method to group customer needs

 

The needs can be grouped into primary, secondary and, in some cases, tertiary levels. The primary level consists of the actual phrases and statements used by the customer. The needs at the appropriate level are then transferred to the quality function deployment (QFD) matrix. Once the needs are grouped using the K. J. method, customer surveys are conducted to assess their priority. It is essential that the needs are understood before the survey questionnaire is designed.

The survey will typically include questions related to

  1. importance ratings
  2. performance of our current products
  3. performance of competitors’ products

Importance ratings can be done using absolute, relative or ordinal ratings. Each has its own benefits. In absolute rating, customers tend to give high importance ratings to all needs. Relative methods are, therefore, sometimes preferred over absolute ratings.

Quality Function Deployment

The customer needs identified and ranked earlier can now be organized using quality function deployment (QFD). The QFD is a method for structured product planning and development that enables a development team to specify clearly the customers’ wants and needs and then to evaluate each proposed product or service capability systematically in terms of its impact on meeting the specifications (Cohen 2004).

In QFD, one or more matrices showing the relationships between customers’ wants and design characteristics are developed. The tool provides a conceptual map for communication across functions. It aims at keeping the focus on customer needs throughout the development of a product. The main output is a house of quality (HoQ), as shown in Figure 20.2. The QFD technique evolved in Japan's shipbuilding industry in the late 1960s to support the product design process. It has been used extensively in a range of industries, and globally since the 1970s.

Customer wants and needs: These are listed, with priorities, in the extreme left column of this matrix (What's). The list typically contains secondary-level customer needs obtained after grouping the VOC using the K. J. process. Intuition, judgment and empathy are of great value in clustering the phrases.

 

 

Figure 20.2 Elements of QFD (House of Quality)

 

The planning matrix: This is usually the second section completed. The matrix provides quantitative market data for each of the customer attributes. This indicates the relative importance of the needs and benefits to the customer. Values can be based on user research, competitive analysis or team assessment. Based on market research, the planning matrix usually consists of information such as

  • how important the need is to the customer
  • how well “our” current products meet the need
  • how well the competitors’ products meet the need

This information is important in assessing the potential for improving “our” current products.

Technical response: This is a set of product or process requirements stated in the organization's own language. The process/engineering characteristics that will achieve these requirements (How's) are tabulated as column headings. Sometimes these are called corporate expectations. The technical response can contain one of the following alternatives:

  • Top level solution-independent metrics or measurements
  • Product or service requirements
  • Product or service features or capabilities.

These responses are sometimes called substitute quality characteristics (SQCs). SQCs represent the “voice of the developer”.

Relationship matrix: The relationship between each row (What) and column (How) is rated and entered in the corresponding cell, typically as strong, moderate, weak or none. Traditional symbols used in QFD are:

 

◎   for strong relationship

○   for moderate relationship

Δ   for weak relationship

 

Some practitioners prefer numbers instead of symbols. The numbers used by most practitioners are 9 for strong, 3 for moderate, 1 for weak and 0 for no relationship.
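When numeric ratings are used, they also allow a rough prioritization of the technical responses: multiply the importance of each customer need by the relationship value in the corresponding cell and sum down each column. A minimal sketch in Python, with hypothetical needs, importance weights and ratings (illustrative values, not taken from any figure in this chapter):

# Hypothetical QFD relationship matrix for a pressure cooker.
# Rows = customer needs, columns = technical responses (SQCs).
# Relationship weights: 9 strong, 3 moderate, 1 weak, 0 none.
importance = {"cooks fast": 5, "easy to clean": 3, "safe to handle": 5}

relationships = {
    "cooks fast":     {"operating pressure": 9, "vessel thickness": 3, "valve release force": 0},
    "easy to clean":  {"operating pressure": 0, "vessel thickness": 1, "valve release force": 0},
    "safe to handle": {"operating pressure": 3, "vessel thickness": 0, "valve release force": 9},
}

# Technical importance = sum over needs of (importance x relationship value).
scores = {}
for need, row in relationships.items():
    for sqc, strength in row.items():
        scores[sqc] = scores.get(sqc, 0) + importance[need] * strength

for sqc, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{sqc}: {score}")

The SQC with the highest score is the technical response that most strongly drives customer satisfaction and deserves the closest engineering attention.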

Technical matrix: This section is used by the design teams to set competitive benchmarks and targets. The language is the same as that used for technical responses or SQCs.

Technical correlations: The correlations of characteristics, whether mutually supporting or contradictory, are recorded. These are useful for identifying and addressing the conflicting SQCs.

Using QFD, the voice of the customer is effectively carried from concept design through to the manufacturing stage. QFD, however, has its own limitations. It demands a cross-functional team, including marketing, technical and production representation. It can be exceedingly complex and time-consuming, sometimes tedious. A numerical answer can be mistaken for the “right” answer. It also requires some training and strong facilitation initially.

Example: QFD for a Pressure Cooker

Figure 20.3 shows a partially completed house of quality for a pressure cooker.

 

 

Figure 20.3 An example of the QFD of a pressure cooker using SigmaXL template

 

 

Figure 20.4 QFD from the system to the process

QFD consists of multiple matrices, linking the technical response at one level to the next. See Figure 20.4 for an illustration.

The Kano Model

A very useful model was developed by the Japanese TQM consultant Noriaki Kano (Cohen 2004). The Kano model relates customer satisfaction to product characteristics. Product characteristics are features or capabilities of a product.

The Kano model (see Figure 20.5) divides product characteristics into the following categories:

  1. Dissatisfiers

  2. Satisfiers

  3. Delighters

The X axis represents the actual performance and the Y axis represents the extent of satisfaction.

 

 

Figure 20.5 The Kano model

 

A dissatisfier is a product characteristic that the customer takes for granted, but that causes dissatisfaction when it is missing. Customers expect these qualities in the product, but the presence of these does not result in satisfaction. Dissatisfiers can be termed as expected quality. Some examples of dissatisfiers are leakage in a product, user manual missing, tool kits missing, scratches in the paint, broken parts, no remote control for a TV set, government regulations not being followed, etc. Customer complaint databases are an important source for dissatisfiers in our current products. Our best possible actual performance in reducing dissatisfiers can only raise customer satisfaction to a “not dissatisfied” level.

A satisfier is something that customers want in a product and generally ask for. Satisfiers are sometimes known as the desired quality. Examples of satisfiers for a car could be smooth run on the road, good fuel consumption, easy loading of goods, good visibility for the driver, better pick-up, etc. Satisfiers are usually present in competitive products. They are relatively easy to measure and, therefore, benchmark. Information about satisfiers is gathered during customer interviews and surveys.

A delighter is an attribute or a feature that pleasantly surprises the customer. Delighters satisfy hidden needs that have not been fulfilled by existing products. The absence of delighters, by definition, does not result in dissatisfaction; customers would not know what they are missing. Customers never specify delighters. Delighters can be called unexpected quality. Examples of delighters are not easy to find, but some could be 3M Post-it notes, the Walkman, Windows, railway reservation in India from any station to any other station, etc.

Over a period of time, the “unexpected” quality becomes desired, and satisfiers are taken for granted and become expected. QFD is a tool to plan for and manage these different qualities. Delighters can create new markets or market segments, or can offer a temporary competitive advantage.

The Kano model leads to the following strategy:

  • Must have satisfiers

  • Cannot afford dissatisfiers

  • Must provide more satisfiers than the competitors do

  • Continuously search for new delighters for a competitive advantage.

This process of capturing the VOC and translating it into product specifications is generic for product development. In many Six Sigma projects, customers may be internal. In such cases, the subsequent steps of translating the VOC into product specifications may not apply.

Critical Parameter Management (CPM)

One of the underlying principles of DFSS is critical parameter management. CPM is the philosophy of giving the topmost priority to system performance. Subsystem and component performance is important, but only within the context of system optimization; subsystems and components are resources or means to achieve the system requirements and goals. CPM gathers, integrates and documents the network of critical performance relationships right from the VOC down to the component and material requirements. CPM can be considered a philosophy of identifying the few vital functional responses that are critical for satisfying customer needs, and of tracking our performance on these critical functional responses (Creveling, Slutsky and Antis Jr. 2003).

The ultimate goals of CPM are:

  1. reduced time to develop and market

  2. high quality

  3. low cost

Directly measuring quality, reliability, cost and cycle time before starting production is not easy. These are lagging indicators of performance. By the time we get data on these measures, it can be too late to take action; such measures tend to foster the “build-test-fix” mentality.

CPM measures functions that are directly related to the laws of physics at work in the product and technology. Thus we must measure functions, not quality (e.g. ppm), and use the principles of physical laws. CPM metrology is governed by the engineering sciences, with some assistance from quality. For example, while developing a pressure cooker, if the customer requirements are fast cooking and retention of nutritional value, then for the former we can measure physical parameters such as pressure, opening of the safety valve, etc. For estimating nutritional value, help from a biochemist will be required.

A parameter can be

  • a measurable functional response that is either a continuous or a discrete variable (continuous is preferred),

  • a controllable factor that contributes to a measurable functional response, or

  • an uncontrollable factor that contributes to a measurable response.

A measurable functional response is defined as a dependent variable associated with a product or a manufacturing, assembly, service or maintenance process. Controllable and uncontrollable factors are independent variables.

CPM systematically focuses attention on the functions, parameters and responses of a design that are critical to the customer. Thus we must

  • understand the critical customer needs,

  • identify the functional responses that strongly impact the satisfaction of these needs,

  • identify the parameters—controllable and uncontrollable—that impact these functional responses, and

  • develop specifications to maximize satisfaction for the critical needs.

Critical Parameter and Capability Flow Down

In QFD, the new product introduction (NPI) teams identify the characteristics critical to quality (CTQ), critical functional responses (CFR) and their relationship.

Let us consider the earlier example of the pressure cooker. Cooking time is a CTQ in this case. The cooking time is controlled by the temperature inside the cooker, and the cooking temperature is a function of pressure. The pressure is, therefore, a critical adjustment factor (CAF). To control the pressure, the quality of the control valve and the sealing system must be assured. If there is leakage at the valve or the rubber seal, the pressure will drop and the cooking time will be adversely affected. The time also depends on the material and thickness of the vessel; this is another adjustment factor. Similarly, “the cooker should be safe to handle” is a CTQ. Safety depends on the reliability of the safety valve operation. See Figure 20.6 for an illustration of the flow down.

Capability Growth Index (CGI)

Capability index Cp indicates the ability of the product or process to meet customer requirements for individual critical functional responses (CFRs), e.g., the fuel efficiency of a car, the pressure inside a cooker, the delivery of a lubricating oil pump, the time to serve food in a restaurant, the time for checking into a hotel, the temperature of food served, the clarity of a television, the talk time between battery charges of a mobile phone, etc. The concept of the process capability index was discussed in Chapter 7. You may recall that:

Cp = (USL − LSL) / 6σ

where USL and LSL are the upper and lower specification limits and σ is the standard deviation of the CFR.

Figure 20.6 Critical parameters and capability flow down for a pressure cooker

 

Systems and subsystems can have more than one CFR; for example, delivery, pressure, alignment, backlash, etc. Customer satisfaction depends on the composite of all CFRs. For a Six Sigma company, the target Cp for each CFR is 2.

Capability growth index (CGI) is defined as:

CGI = [(Cp1/2) + (Cp2/2) + … + (Cpn/2)] / n × 100 percent

where Cp1, Cp2, …, Cpn are the capability indices of the individual CFRs and n is the number of CFRs.

Our target Cp is 2; thus the target value of Cp/2 is 1. If (Cp/2) > 1, it is capped (truncated) to 1, so there is no extra credit for exceeding the target Cp on any single CFR. The value of CGI is expressed as a percentage. CGI is a strong indicator of how well the product (or service) meets customer expectations.

Application Example

Oil delivery, end play and backlash are three critical functional responses for a lubricating oil pump. Means and standard deviations are estimated from pilot lots. Table 20.3 shows the Cp and CGI calculations: the specifications, means, standard deviations and the calculation of CGI are all illustrated in the table.
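Since Table 20.3 is not reproduced here, the CGI arithmetic can be sketched in Python with assumed specification limits and pilot-lot standard deviations for the three CFRs (illustrative values only, not the book's data):

# Hypothetical data for the three CFRs of the oil pump.
# Each CFR: (LSL, USL, standard deviation estimated from the pilot lot).
cfrs = {
    "oil delivery": (4.00, 6.00, 0.200),
    "end play":     (0.05, 0.25, 0.025),
    "backlash":     (0.10, 0.30, 0.015),
}

def cp(lsl, usl, sigma):
    """Process capability index Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sigma)

# CGI: average of Cp/2 over the CFRs, each term capped at 1,
# expressed as a percentage.
ratios = [min(cp(lsl, usl, sd) / 2.0, 1.0) for lsl, usl, sd in cfrs.values()]
cgi = 100.0 * sum(ratios) / len(ratios)

for name, (lsl, usl, sd) in cfrs.items():
    print(f"{name}: Cp = {cp(lsl, usl, sd):.2f}")
print(f"CGI = {cgi:.1f}%")

With these assumed numbers, backlash exceeds the target (Cp = 2.22, capped at 1) while the other two CFRs fall short, giving a CGI of about 83 percent.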

CGI should be tracked throughout a DFSS project. Figure 20.7 is an illustration of the CGI tracking over DFSS phases.

 

Table 20.3 CGI calculation for the example of the oil pump

 

 

Figure 20.7 Tracking of CGI for improvement over DFSS phases

Design for X (DFX)

While developing new designs, it is essential to consider business priorities and engineering, manufacturing and sourcing strategies. DFX refers to design for X, where X can be reliability (DFR), cost (DFC), manufacturing (DFM), assembly (DFA), testability (DFT) or serviceability (DFS). The following paragraphs briefly explain some of these.

Design for Manufacturing (DFM)

DFM is mainly concerned with materials and manufacturing processes. Some general principles of DFM are listed below:

  • Involve the right people from the manufacturing and supply-chain during concept generation, selection and development.
  • Avoid exotic materials and processes unless really needed.
  • Listen to the inputs of manufacturing and supplier representatives to make the designs ‘producible’.

Some examples of DFM are

  • Castings: Drafts, shrinkage allowances, machining allowances, initial pick-up points, provision for locating and holding in fixtures; avoiding sudden changes in sections; consideration for foundry technology (green sand, shell, die cast, etc.); machinability.
  • Forgings: Die removal, grain flow, machining allowances, initial pick-up points, provision for locating and holding in fixtures; avoiding sudden changes in sections; consideration for forging technology; machinability, etc.
  • Machining: Tolerances considering the capabilities of processes (without affecting functions and fits!), standard tooling availability (such as in gear machining), ease of tooling and holding.
  • Heat treatment: Hardenability of materials; avoiding thin sections, which can lead to cracking at fillet radii.

Design for Assembly (DFA)

DFA mainly addresses the features and characteristics that improve the ease of assembly. Some general principles of DFA are

  • reducing complexity
  • minimizing the number of parts or fasteners
  • standardizing parts, preferably across families or models
  • minimizing the number of assembly motions and tools required
  • selecting fasteners that are standard and easy to identify
  • ensuring enough space for using assembly tools and spanners
  • facilitating easy measurements
  • mistake proofing the fitment of similar parts
  • providing easy identification

Design for Cost (DFC)

Designs quite often have a significant impact on product costs. In highly competitive markets, cost can determine the business case for a product. It is said that 70 percent of the life cycle cost is determined at the design stage. The designer must, therefore, consider DFC an important aspect of new product development.

Some useful tools for DFC are as follows:

  • Value analysis/function analysis: Designers must understand which features are important to the customers and which are not. Thus, a review of QFD is an essential part of DFC.
  • Target costing: Designers work towards meeting a target cost. The recent example of the Nano from Tata Motors is a classic lesson for designers. Another example is mini laptop computers priced at about Rs. 15,000. Only the essential functions that add value for the customers are retained in the product.
  • Activity based costing: Instead of allocating overheads to products on an arbitrary basis, activity based costing (ABC) assigns overheads on the basis of the activities which cause these costs to occur. By focusing on activities which consume resources, ABC can reveal more useful information for product costing. This way, value-adding activities are separated from non-value-adding activities.

Design for Reliability (DFR)

Reliability is the probability that a product will perform its intended function for a given period of time under stated conditions. DFR assures that reliability is built into the design. DFR can be achieved broadly by

  • knowledge-based engineering
  • designing products to bear applied loads or transfer heat
  • variation reduction and control
  • adequately accounting for the sources and effects of variation that can affect functional performance
  • robust design
  • the use of compensation techniques to correct functional performance affected by wear, for example, automatic belt tension adjustment
  • redundancy (another popular strategy)
  • noise isolation (or suppression), e.g., thermal insulation, seals on bearings, filters
  • the use of simpler designs, as they tend to be more reliable
  • avoiding complexity, which reduces reliability
  • giving preference to proven technology
  • the use of computing power and analysis to predict and improve reliability
  • the use of design FMEA to prepare design verification and validation plans
  • stress-strength analysis
  • testing to failure (rather than testing for success)
  • the use of “reliability growth” models to track reliability and verify performance against goals.

Design for Maintainability or Serviceability (DFS)

DFS has a strong relationship with design for assembly. Some underlying principles are

  • low fastener count
  • low tool count
  • predictable maintenance schedules
  • one-step service functions
  • extension of maintenance intervals, since predictable failures (for example, filter or oil changes) are less expensive than random failures
  • providing diagnostics and monitoring facilities.

Design for Testability (DFT)

The principle of DFT is to facilitate easy measurement of the product's ability to perform. A variable data acquisition system must be included as an integral part of the design. Continuous variable measurements are preferred over attribute data.

 

 

Figure 20.8 Taguchi's loss function

Robust Design

The concept of robust design was introduced by Dr Genichi Taguchi. Taguchi introduced a new dimension to the definition of quality, viewing quality in terms of “the loss a product causes to society after it is shipped”. He defined the loss function to quantify this loss. The loss function depends on the type of CTQ. An illustration of the loss function for a CTQ where “nominal is the best” is shown in Figure 20.8.

Loss L is given by

L = k(y − T)²
where T is the target (the most desirable value), y is the actual value and k is a constant.

Application Example

A company manufactures an engine shaft with a specification of 25 ± 1 mm. The cost of scrapping a shaft is Rs. 100.

A shaft is scrapped if its size is below the lower limit. Evaluate the loss function.

At the lower limit, y = 24 and the loss is Rs. 100.

Thus, 100 = k(24 − 25)², i.e., k = 100.

Thus, the loss function is L = 100(y − 25)².
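The same calculation is easy to script. A small Python sketch of the loss function for this shaft example:

def taguchi_loss(y, target=25.0, k=100.0):
    """Quadratic loss L = k * (y - target)**2, in rupees per shaft."""
    return k * (y - target) ** 2

# At the lower specification limit the loss equals the scrap cost.
print(taguchi_loss(24.0))   # 100.0
# A shaft within specification but off target still carries a loss.
print(taguchi_loss(24.5))   # 25.0

Note that a shaft at 24.5 mm is accepted by the specification yet carries a predicted loss of Rs. 25; this is the essential difference between the loss-function view and the goalpost (pass/fail) view of quality.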

A robust design means immunizing a design against variation in noise factors. A simple example of robust design is earthquake-resistant buildings in Japan. After going through major destruction in the past, Japanese engineers have nearly perfected the technology to design and construct buildings that can survive severe earthquakes. According to Madhav Phadke, one of the first professionals to apply the Taguchi philosophy in the US (Phadke 1989),

  1. Design directly influences more than 70 percent of the product life cycle cost.
  2. Companies with high product development effectiveness have earnings three times the average earnings.
  3. Robust design method is central to improving engineering productivity.

Taguchi suggests a three-step approach for a robust design (Taguchi 2005):

  1. System design: New concepts, ideas and methods are generated to provide new and better products.
  2. Parameter design: Designing optimum settings of control factors to assure uniform and robust products. DOE is commonly used for optimization.
  3. Tolerance design: Quality is improved by fine-tuning and/or tightening tolerances at minimum cost.

System design is primarily a result of engineering/technical knowledge and innovation. The initial step in parameter design is to develop a parameter diagram, or P-diagram, which is quite similar to a process map. Figure 20.9 shows a generic P-diagram.

 

 

Figure 20.9 Illustration of a P-diagram

 

Taguchi categorized various factors into three types:

  1. Control factors, which the designer can choose. Examples are material, size, length, resistance, etc.
  2. Noise factors, over which the designer does not have any control. Examples are voltage fluctuations for electronic systems, environmental changes, road conditions for automobiles, manufacturing errors, etc.
  3. Signal factors, which can be used to adjust the response of the system. For example, the accelerator pedal in a car is used to control the speed of the car.

Noise can be external or internal. Internal noise is due to part-to-part variation.

The robust design approach aims at minimizing the effect of noise on the performance and reliability of the product. In a robust DOE, we are interested in both the mean and the variation. Taguchi introduced a figure of merit called the signal-to-noise ratio (SN ratio).

Our objective is to maximize the SN ratio. Depending upon the type of CTQ, SN ratios can be estimated using the following formulae:

Smaller the better: SN = −10 log10[(Σyi²)/n]

Larger the better: SN = −10 log10[(Σ(1/yi²))/n]

Nominal the best: SN = 10 log10(ȳ²/s²)

where n is the number of responses in the factor-level combination, and ȳ and s are the mean and standard deviation of the responses for that combination. Taguchi suggests that maximizing the SN ratio improves the robustness of the design.
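These ratios are straightforward to compute. A minimal Python sketch of the three formulae above, applied to the replicate responses of a single factor-level combination (the data are hypothetical):

import math

def sn_smaller_is_better(ys):
    """SN = -10 log10((1/n) * sum(y^2)). Maximize when smaller responses are better."""
    n = len(ys)
    return -10 * math.log10(sum(y ** 2 for y in ys) / n)

def sn_larger_is_better(ys):
    """SN = -10 log10((1/n) * sum(1/y^2)). Maximize when larger responses are better."""
    n = len(ys)
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / n)

def sn_nominal_is_best(ys):
    """SN = 10 log10(ybar^2 / s^2), with s the sample standard deviation."""
    n = len(ys)
    ybar = sum(ys) / n
    s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(ybar ** 2 / s2)

# Three hypothetical replicates of fatigue strength for one treatment:
print(f"SN (larger is better) = {sn_larger_is_better([7.8, 8.1, 8.0]):.2f} dB")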

DOE for Robust Design

The objective of robust design is to make a product less sensitive to the noise factors. Taguchi recommends orthogonal arrays for designed experiments. There are two types of Taguchi designs:

  • Passive designs: Noise factors are not forced into this design.
  • Dynamic designs: Noise factors are forced into this design.

Table 20.4 is an illustration of a passive design. There are three replicates of the design, with y1, y2 and y3 being the responses in the three replicates. The effect of noise factors can be estimated from the variation in each row, i.e., for each treatment. The robustness of each treatment can be measured by calculating the SN ratio for its row.

In the dynamic design, noise factors are forced into the design. Table 20.5 shows an illustration of a dynamic design with two noise factors, N1 and N2, each having two levels. The inner array is a 2⁴⁻¹ resolution IV design. The outer array is a 2² design in the two noise factors; this outer array determines the number of replicates for the inner array.
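As a concrete illustration, the crossed-array structure can be generated in a few lines of Python. The generator D = ABC for the inner array is an assumption for illustration, since Table 20.5 is not reproduced here:

from itertools import product

# Inner array: 2^(4-1) fractional factorial for control factors A, B, C, D
# with generator D = ABC (defining relation I = ABCD, resolution IV).
inner = [(a, b, c, a * b * c) for a, b, c in product([-1, 1], repeat=3)]

# Outer array: full 2^2 design in the noise factors N1 and N2.
outer = list(product([-1, 1], repeat=2))

# Every inner-array treatment is run at every outer-array combination,
# so the outer array fixes the number of "replicates" per treatment.
runs = [(treatment, noise) for treatment in inner for noise in outer]
print(len(inner), "treatments x", len(outer), "noise combinations =", len(runs), "runs")
print(runs[0])   # e.g. ((-1, -1, -1, -1), (-1, -1))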

 

Table 20.4 Illustration of a passive design

 

Table 20.5 Illustration of a dynamic design

 

Application Example of a Taguchi DOE (Passive Design)

A designer wants to maximize the fatigue strength of a shaft. There are three factors which are expected to affect this strength. These factors and their levels selected for experimentation are:

  1. Fillet radius: low level = 6 mm, high level = 8 mm

  2. Surface roughness: low level = 0.2 micron, high level = 0.8 micron

  3. Case depth: low level = 1 mm, high level = 2 mm

Table 20.6 shows the factor levels, the responses for the three replicates and the SN ratios for fatigue strength, a larger-the-better response.

 

Table 20.6 Data and calculations of an example of a Taguchi DOE

 

The SN ratio is maximum for radius = 8, roughness = 0.2 and case depth = 2, the combination giving maximum strength. This is the most robust treatment. The design can be analyzed using the SN ratio as the response.

The Taguchi designs, though popular, have been criticized for the large number of runs required. Some statisticians recommend the classical approach for robust design.

Classical Approach to Robust Design of Experiments

In this approach, the standard deviation (SD) of the replicates under each treatment is estimated. As SD is not normally distributed, a transformed function, ln(SD), is used in the analysis. Ln(SD) is treated like a response in the experiment, and the effects of the factors on it are analyzed as for any other response.
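The pre-processing step of this classical approach is easy to sketch in Python. The treatment data below are hypothetical, not those of the fatigue example that follows:

import math
from statistics import stdev

# Hypothetical 2^3 design: one entry per treatment (levels of A, B, C)
# with three replicate responses.
runs = {
    (-1, -1, -1): [6.8, 7.0, 6.9],
    ( 1, -1, -1): [7.9, 8.2, 8.0],
    (-1,  1, -1): [6.5, 6.9, 6.2],
    ( 1,  1, -1): [7.1, 7.4, 7.3],
    (-1, -1,  1): [7.2, 7.3, 7.1],
    ( 1, -1,  1): [8.5, 8.6, 8.4],
    (-1,  1,  1): [6.9, 7.2, 6.6],
    ( 1,  1,  1): [7.6, 8.0, 7.8],
}

# Pre-processing: the standard deviation of the replicates per treatment,
# then ln(SD), which is analyzed like any other response.
for levels, ys in runs.items():
    sd = stdev(ys)
    print(levels, f"SD = {sd:.4f}", f"ln(SD) = {math.log(sd):.3f}")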

This is illustrated with the same example of maximizing fatigue strength. The designed experiment and the fatigue strength data are available in the worksheet DOE FATIGUE 3 REPLICATIONS.mtw. Columns C1, C2, C3 and C4 in Table 20.7 show the partial data. Columns C5 to C10 are created later, during processing of the data.

This is first analyzed like any DOE. The main effect and interaction plots are shown in Figure 20.10.

 

Table 20.7 Partial data from the file DOE FATIGUE 3 REPLICATIONS in columns C1 to C4 and the standard deviation calculated by Minitab and stored in column SD

 

 

Figure 20.10 Main effect and interaction plots

 

We now use a special feature in Minitab software for robust DOE.

First “pre-process the data” to calculate standard deviations of repeat treatment combinations using the commands > Stat > DOE > Factorial > Pre-process Responses for Analyze variability.

In the dialogue box, as shown in Figure 20.11, select “Compute for replicates in each response column”, select Strength as the response, enter SD in “Store Std. Dev. in” and enter Count in “Store Counts in”.

With this command, standard deviations are calculated for all eight treatments and stored in the column SD as shown in Table 20.7.

 

 

Figure 20.11 Minitab dialogue box for pre-processing data

 

 

Figure 20.12 Pareto charts of effects before and after reducing the model, with ln(SD) as response

 

Now analyze the experiment for ln(SD) as response. The command is > Stat > DOE > Factorial > Analyze variability using default choices.

None of the factors and interactions is significant (see the Pareto chart in Figure 20.12 (a)). However, there are only 7 degrees of freedom. Omit ABC and AC, the interactions with very small effects. Refer to Figure 20.12 (b) for the Pareto chart of effects after reducing the model. Factor A and interaction AB are now significant at the 95 percent confidence level. Main effects and interaction plots for SD are shown in Figure 20.13.

 

 

Figure 20.13 Main effect and interaction plots for standard deviation

 

Looking at the main effects and interaction plots shown in Figure 20.13, we can see that the radius-roughness and radius-case depth interactions are large.

Now use the response optimizer to maximize strength (minimum 7, target 9) and minimize the standard deviation (target 0.025, maximum 0.35).

Results of response optimizations are shown in Figure 20.14.

The most robust settings are a radius of 7.93, a roughness of 0.2 and a case depth of 2.0.

 

 

Figure 20.14 Response optimization for fatigue strength and standard deviation

 

Thus, we can get a good direction for optimizing the parameters for robust design: maximizing strength and minimizing variation.

Types of Control Factors

Table 20.8 lists the various possible main effect plots for the mean and standard deviation and provides guidance for robust design.

 

Table 20.8 Types of factors and application guidance

 

Non-linearity affects variation. To understand this, refer to Figure 20.15, which shows two different possible response patterns to explain the sensitivity of response Y to factor X. In some situations, we can select the factor setting that minimizes the variation in the response. Due to non-linearity, a change in the setting of factor B changes the mean as well as the variation. A change in the setting of factor A changes only the mean and has no effect on the variation in Y, as that response is linear.

 

 

Figure 20.15 Effect of non-linearity on mean and standard deviation

Statistical Tolerancing

In DFSS, while evolving the design, tolerances have a significant impact on the functionality, ease of manufacturing, cost and reliability of the design. Most design applications require more than one part, for example, pumps, transmissions, ball bearings, etc. Each part in the assembly has tolerances and manufacturing variation. The capability index Cp gives us the ratio of the tolerance to the 6σ variation. Tolerances need not apply only to physical dimensions; they can refer to other physical properties as well, such as resistance, capacitance, chemical percentage, etc.

Various approaches are used to analyze tolerances. Some of these are

  • worst case analysis (WCA)
  • statistical tolerance stack-up or root sum squared (RSS)
  • Monte Carlo simulation

Worst case analysis simply evaluates the impact of parts at their minimum and maximum conditions on the functionality of the design. This method, however, results in tighter tolerances and higher manufacturing costs.

The statistical approach to tolerance stack-up uses the property of “additivity of variances”: the variances of the individual parts can be added to find the variation of the assembly. Thus

σ²(assembly) = σ²(part 1) + σ²(part 2) + … + σ²(part n)

This is further explained in Figure 20.16.

 

 

Figure 20.16 Additivity of variances for an assembly

 

Application Example

An assembly consists of two parts. The tolerances of parts A and B are 25 ± 0.25 and 10 ± 0.1 respectively. Assume that each dimension is normally distributed and that each tolerance equals 6 standard deviations. Estimate the tolerance of the assembly using the RSS method. The two parts should fit into part C, with dimension 35.25 ± 0.1. What proportion of assemblies is likely to be right the first time? What is the sigma level?

The tolerance of the assembly (T) can be calculated from the tolerances of the individual parts, T1 and T2. Figure 20.17 shows the assembly and the calculations. The standard deviation of each dimension is calculated by dividing the total tolerance by 6.

 

 

Figure 20.17 Assembly and calculations

 

To calculate right the first time, estimate the proportion of assemblies with gap < 0:

The gap is estimated to have a mean of 0.25 with σ of 0.0957. Note that the gap is calculated as C − (A + B), but the variance of the gap is calculated by adding the variances of (A + B) and C, i.e., (0.0081 + 0.0011) = 0.0092. The standard deviation is the square root of this variance. Use normal distribution tables or the Excel function NORMDIST(0, 0.25, 0.0957, 1) to find the area below 0. The area to the left of 0 is estimated as 0.0044. It can thus be concluded that about 0.44 percent of the assemblies will be rejected, and 99.56 percent will be right the first time. Using a sigma calculator or referring to tables, this corresponds to a sigma level of 4.12.
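The RSS calculation can also be scripted. A short Python sketch of this assembly example using only the standard library:

from math import sqrt
from statistics import NormalDist

# Each tolerance is assumed to span 6 standard deviations (Cp = 1).
sigma_a = 0.50 / 6      # part A: 25 +/- 0.25
sigma_b = 0.20 / 6      # part B: 10 +/- 0.10
sigma_c = 0.20 / 6      # part C: 35.25 +/- 0.10

# Variances add, so the gap C - (A + B) has:
mean_gap = 35.25 - (25 + 10)
sigma_gap = sqrt(sigma_a ** 2 + sigma_b ** 2 + sigma_c ** 2)

# Proportion of assemblies with gap < 0, i.e., parts that do not fit.
p_fail = NormalDist(mean_gap, sigma_gap).cdf(0.0)
print(f"sigma of gap = {sigma_gap:.4f}")
print(f"P(gap < 0) = {p_fail:.4f}, right the first time = {1 - p_fail:.2%}")

The script returns a failure proportion of about 0.0045; the small difference from the 0.0044 above comes from rounding σ to 0.0957 before the table lookup.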

 

Limitations of the RSS Approach

In real life, each tolerance may not equal 6σ, and each characteristic may not be normally distributed. What should we do if these assumptions are not valid?

If we have no idea about the type of variation, we need to use worst case analysis; this is simple arithmetic without any statistics. If we know something about the distributions and standard deviations, Monte Carlo simulation is possible. Monte Carlo simulation requires software such as Crystal Ball or @Risk. A free application, Simular, is also available at http://www.simularsoft.com.ar/.
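The Monte Carlo idea can be sketched with numpy in place of a commercial package. Here the assembly example above is re-run with a million simulated builds, assuming normal variation (any other distribution can be substituted):

import numpy as np

rng = np.random.default_rng(seed=1)
n = 1_000_000

# Simulate the part dimensions; swap in rng.uniform or rng.triangular
# if those match the real variation better.
a = rng.normal(25.00, 0.50 / 6, n)
b = rng.normal(10.00, 0.20 / 6, n)
c = rng.normal(35.25, 0.20 / 6, n)

gap = c - (a + b)
print(f"mean gap = {gap.mean():.4f}, sd = {gap.std(ddof=1):.4f}")
print(f"P(gap < 0) ~ {(gap < 0).mean():.4%}")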

Tolerance Design

Taguchi's tolerance design methodology aims at identifying the impact of the various tolerances on the function. Using this information, the parameters with the minimum cost impact are selected for optimum performance at the lowest cost. Sensitivity analysis can be used for this. We can use tolerance design when the transfer function y = f(x1, x2, …) is known. Software programs like Crystal Ball or Simular make it easy to simulate and perform sensitivity analysis, as sketched below. If the transfer function is not known, use DOE or regression to develop it. DOE can be used for tolerance design when simulation is not possible: run a designed experiment and perform ANOVA to evaluate the effects of the various tolerances (Barker 1990).
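When the transfer function is known, the sensitivity analysis can be sketched by simulation: suppress the variation of one input at a time and observe the share of the output variance it explains. The transfer function and values below are hypothetical:

import numpy as np

rng = np.random.default_rng(seed=2)
n = 200_000

def f(x1, x2):
    """Assumed transfer function y = f(x1, x2), for illustration only."""
    return x1 * x2 / 10.0

means = {"x1": 5.0, "x2": 8.0}
sigmas = {"x1": 0.05, "x2": 0.20}

def output_sd(active):
    """Output SD with only the inputs in 'active' allowed to vary."""
    x1 = rng.normal(means["x1"], sigmas["x1"] if "x1" in active else 0.0, n)
    x2 = rng.normal(means["x2"], sigmas["x2"] if "x2" in active else 0.0, n)
    return f(x1, x2).std(ddof=1)

total = output_sd({"x1", "x2"})
for x in sigmas:
    share = (output_sd({x}) / total) ** 2
    print(f"{x}: share of output variance ~ {share:.1%}")

Tolerances on the dominant input (here x2) are the ones worth tightening; tolerances with a negligible variance share can be relaxed to save cost.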

The Theory of Inventive Problem Solving (TRIZ) and DFSS

Technology DFSS requires creativity in evolving new concepts and products. One useful approach to technology development is TRIZ. ‘TIPS’ is the acronym for the Theory of Inventive Problem Solving, and ‘TRIZ’ is the acronym for the same phrase in Russian. QFD helps us find out what to develop; however, it does not tell us how to develop it. Genrich S. Altshuller, a Russian inventor who worked in the Russian Navy as a patents expert helping inventors, screened more than 200,000 patents and classified them by their level of inventiveness. He concluded that the evolution of technical products and solutions is not a random process but is governed by certain laws, and that these laws can be used to systematically develop technical solutions. This work grew into the technique called TRIZ (www.aitriz.org 2010).

Many large and small companies are using TRIZ at various levels to solve real, practical, everyday problems and to develop strategies for the future of technology. TRIZ is in use at Ford, Motorola, Procter & Gamble, Eli Lilly, Jet Propulsion Laboratories, 3M, Siemens, Philips, LG, and many other companies.

TRIZ can be an important part of TDFSS.

Summary
  • DFSS is a philosophy of developing products designed for a high level of customer satisfaction and reliability at minimum cost and with minimum cycle time.
  • Different roadmaps may be followed
    • TDFSS: I2DOV or DMADV
    • Product development: DMADV or CDOV
  • Some key principles of DFSS
    • Critical parameter management
    • DFX: DFM, DFA, DFR, DFC
    • Robust design
  • Special tools such as TRIZ can be effective in technology DFSS.
References

Barker, Thomas B. (1990). Engineering Quality by Design: Interpreting the Taguchi Approach. New York, NY: Marcel Dekker Inc.

Cohen, Lou (2004). Quality Function Deployment. Singapore: Pearson Education.

Creveling, C. M., J. L. Slutsky, and E. D. Antis, Jr. (2003). Design for Six Sigma in Technology and Product Development. New Delhi: Pearson Education.

Gitlow, Howard S., David M. Levine and Edward A. Popovich (2006). Design for Six Sigma for Green Belts and Champions: Applications for Service Operations–Foundations, Tools, DMADV, Cases, and Certification. New Delhi: Pearson Education.

Jiang, Jui-Chin, Ming-Li Shiu, and Mao-Hsiung Tu (2007). DFX and DFSS: How QFD Integrates Them. Quality Progress, October 2007. Milwaukee, WI: American Society for Quality.

Phadke, Madhav S. (1989). Quality Engineering Using Robust Design. Upper Saddle River, NJ: Prentice Hall Inc.

Ross, Phillip J. (1988). Taguchi Techniques for Quality Engineering. New York, NY: McGraw-Hill Book Company.

Taguchi, Genichi (2005). Taguchi's Quality Handbook. Hoboken, NJ: John Wiley and Sons.

Woodford, D. (2002). Design for Six Sigma: IDOV Methodology. http://www.isixsigma.com/library/content/c020819a.asp.

www.aitriz.org. Accessed on 8 January 2010.