The concept of statistical quality methods. Statistical methods and the features of their application

The ISO standards state that the correct application of statistical methods is essential for controlling actions in market analysis, for product design, for predicting durability and service life, for studying process controls, for defining quality levels in sampling plans, for evaluating performance, for improving the quality of processes, and in safety assessment and risk analysis.

Using statistical methods, it is possible to identify quality problems in a timely manner (to detect process disturbances before defective products are released). To a large extent, statistical methods also make it possible to establish the causes of such disturbances.

The need for statistical methods arises, first of all, from the need to minimize the variability of processes.

Variability is understood as the deviation of observed values from the specified (target) values. Variability that is not detected in a timely manner can pose a fatal hazard both for production and for the products and the enterprise as a whole.

A systems approach to decision-making based on the theory of variability is called statistical thinking. As defined by the American Society for Quality, statistical thinking is based on three fundamental principles:

1) any work is carried out in a system of interrelated processes;

2) there are variations in all processes;

3) understanding and reducing variation is the key to success.

Deming said, "If I had to put my management message in just a few words, I would say the whole point is to reduce variation."

The reasons for the variation of any processes can be divided into two groups.

The first group comprises common causes associated with the production system as a whole (equipment, buildings, raw materials, personnel); the corresponding variability cannot be reduced without changing the system. Any action by ordinary employees (the executors) in this situation will most likely only make matters worse; intervention in the system almost always requires action from top management.

The second group comprises special causes associated with operator errors, setup failures, violations of operating modes, etc. The elimination of these causes is handled by the personnel directly involved in the process. These are non-random, assignable causes: tool wear, loosening of fasteners, a change in coolant temperature, a violation of the technological regime. Such causes should be investigated and can be eliminated by adjusting the process, which ensures its stability.

The main functions of statistical methods in the quality management system:

Cognitive information function

Predictive function

Evaluation function

Analytical function

False alarms and undetected (missed) alarms

In this case, we are talking about statistical errors: their occurrence may raise a false alarm, while, on the contrary, a failure to detect a real deviation turns into a missed (undetected) alarm.

In general, observation errors are discrepancies between the results of statistical observation and the actual values of the studied quantities.

When conducting statistical observations, two types of error are distinguished:

1) registration errors

2) errors of representativeness

Registration errors occur due to the incorrect establishment of facts in the process of observation, their erroneous recording, or both.

Registration errors can be accidental and systematic, deliberate and unintentional.

Random errors are those errors that occur under the influence of random factors.

Such errors can be directed both towards exaggeration and towards understatement, and with a sufficiently large number of observations, these errors are mutually canceled out under the action of the law of large numbers.

Systematic errors arise from certain constant causes acting in the same direction, i.e. towards either exaggerating or understating the size of the data, which leads to serious distortions of the overall results of the statistical observation.

Intentional errors are errors caused by deliberate corruption of data.

Unintentional errors are errors of a random, unintentional nature, caused, for example, by faulty measuring instruments.

Representativeness errors - such errors occur during non-continuous observation. They, like registration errors, can be random and systematic.

Random errors of representativeness arise because the sample of observation units, selected on the basis of the principle of randomness, does not exactly reproduce the entire population; the magnitude of this error can be estimated.

Systematic errors arise due to violation of the principle of randomness in the selection of units of the studied population, which should be subjected to observation.

The size of these errors, as a rule, cannot be quantified. The validity of statistical observation data can be verified by carrying out checks.

Classification of deviations of product quality parameters and control methods

Depending on the source and method of obtaining information, quality assessment methods are classified into objective, heuristic, statistical and combined (mixed) methods. Objective methods are divided into measuring, registration, calculation and trial-operation methods. Heuristic methods include organoleptic, expert and sociological methods.

The use of statistical methods is one of the most effective ways to develop new technologies and control the quality of processes.

Question 2. Reliability of systems. Evaluation of the probability of failures and the probability of failure-free operation of the system for various connection schemes of the elements included in it.

System reliability

The reliability of a system is the property of an object to maintain over time, within established limits, the values of all parameters that characterize its ability to perform the required functions in the specified modes and conditions of use, maintenance, repair, storage and transportation.

The reliability indicator quantitatively characterizes one or more properties that make up the reliability of an object.

A reliability indicator can be dimensional (for example, mean time between failures) or dimensionless (for example, the probability of failure-free operation).

Reliability indicators can be single or complex. A single reliability indicator characterizes one of the properties, while a complex indicator characterizes several properties constituting the reliability of the object.

The following reliability indicators are distinguished:

Serviceability

Operability

Reliability

Durability

Maintainability

Recoverability

Persistence, etc.

Reasons for making unreliable products:

1) lack of regular verification of compliance with standards;

2) errors in the use of materials and incorrect control of materials during production;

3) incorrect accounting and reporting of control, including information on technology improvements;

4) substandard sampling schemes;

5) lack of tests of materials for their compliance;

6) failure to comply with acceptance test standards;

7) lack of guidance materials and instructions for the control;

8) non-regular use of control reports to improve the technological process.

Evaluation of the probability of failures and the probability of failure-free operation of any system depends on the connection diagram of the elements included in it.

There are three connection schemes:

1) serial connection of elements


A system of series-connected elements is operable only when all of its elements are operable, and the greater the number of elements in the system, the lower its reliability.

The reliability of series-connected elements can be found by the formula:

P = p1 · p2 · ... · pn = p^n     (1)

where p is the degree of reliability of an element and n is the number of elements.

The probability of failure of a system of series-connected elements is found by the formula:

Q = 1 - P = 1 - p^n

2) parallel connection of elements


Parallel connection of the elements increases the reliability of the system.

The reliability of a system with parallel connection of elements is determined by the formula:

P = 1 - q1 · q2 · ... · qn = 1 - q^n

where q = 1 - p is the degree of unreliability of an element.

The probability of failure with parallel connection of elements is determined by the formula:

Q = q1 · q2 · ... · qn = q^n

3) Combined connections.

There are two Schemes of combined connections of elements.

Scheme (1) - reflects the reliability of the system when two subsystems are connected in parallel, when each of them consists of two series-connected elements.

Scheme (2) - reflects the reliability of the system when two subsystems are connected in series, when each of them consists of two parallel-connected elements


The reliability of the system when two subsystems are connected in parallel, when each of them consists of two series-connected elements, is determined by the formula:

The reliability of the system when two subsystems are connected in parallel, each of them consisting of two series-connected elements, is determined by the formula:

P = 1 - (1 - p²)²

The reliability of the system when two subsystems are connected in series, each of them consisting of two parallel-connected elements, is determined by the formula:

P = (1 - q²)² = (1 - (1 - p)²)²
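To make the connection-scheme formulas above concrete, here is a minimal Python sketch; the element reliability p = 0.9 and the number of elements are assumed illustrative values, not figures from the text.

```python
# Reliability of simple connection schemes, assuming every element
# has the same reliability p (illustrative value).
p = 0.9          # reliability of one element (assumed for illustration)
q = 1 - p        # unreliability of one element
n = 2            # number of elements in each subsystem

# 1) series connection of n elements
P_series = p ** n            # the system works only if all elements work
Q_series = 1 - P_series      # probability of failure

# 2) parallel connection of n elements
Q_parallel = q ** n          # the system fails only if all elements fail
P_parallel = 1 - Q_parallel

# 3a) two series subsystems (of 2 elements each) connected in parallel
P_combined_1 = 1 - (1 - p ** 2) ** 2

# 3b) two parallel subsystems (of 2 elements each) connected in series
P_combined_2 = (1 - q ** 2) ** 2

print(f"series:     P = {P_series:.4f}, Q = {Q_series:.4f}")
print(f"parallel:   P = {P_parallel:.4f}, Q = {Q_parallel:.4f}")
print(f"combined 1: P = {P_combined_1:.4f}")
print(f"combined 2: P = {P_combined_2:.4f}")
```

For the same element reliability, scheme (2) comes out more reliable than scheme (1), which reflects the general advantage of duplicating at the element level rather than at the subsystem level.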

Statistical methods (methods based on the use of mathematical statistics) are an effective tool for collecting and analyzing quality information. The use of these methods does not require large expenditures and makes it possible, with a given degree of accuracy and reliability, to judge the state of the studied phenomena (objects, processes) in the quality system, to predict and regulate problems at all stages of the product life cycle and, on this basis, to develop optimal management decisions. The need for statistical methods arises, first of all, in connection with the need to minimize the variability of processes. Variability is inherent in virtually all areas of quality assurance. However, it is most typical of processes, since they contain many sources of variability.

One of the main stages of psychological research is the quantitative and substantive analysis of the results obtained. The substantive analysis of research results is the most significant, difficult and creative stage. The use of statistics in psychology is a necessary component of data processing and analysis; it offers only quantitative arguments, which require substantive justification and interpretation.

Conventionally, all methods can be classified on the basis of commonality into three main groups: graphic methods, methods for analyzing statistical aggregates, and economic and mathematical methods.

Graphical methods are based on the use of graphical tools for analyzing statistical data. This group includes methods such as the checklist, the Pareto chart, the Ishikawa diagram, the histogram, the scatter plot, stratification, the control chart, the time series graph, etc. These methods do not require complex calculations and can be used both independently and in combination with other methods. Mastering them is not difficult, not only for engineering and technical personnel but also for shop-floor workers. At the same time, these are very effective methods; it is no accident that they find the widest application in industry, especially in the work of quality groups.

Methods for the analysis of statistical populations serve for studying information when the change in the analyzed parameter is random. The main methods included in this group are regression, variance (ANOVA) and factor analysis, the method of comparing means, the method of comparing variances, etc. These methods make it possible to establish the dependence of the studied phenomena on random factors, both qualitative (analysis of variance) and quantitative (correlation analysis); to study the relationships between random and non-random variables (regression analysis); and to identify the role of individual factors in the change of the analyzed parameter (factor analysis), etc.

Economic-mathematical methods are a combination of economic, mathematical and cybernetic methods. The central concept of the methods in this group is optimization, i.e., the process of finding the best option from the many possible ones, taking into account the accepted criterion (criterion of optimality). Strictly speaking, economic-mathematical methods are not purely statistical, but they make wide use of the apparatus of mathematical statistics, which gives grounds to include them in the considered classification of statistical methods. For quality assurance purposes, the following should be singled out first of all from this fairly extensive group: mathematical programming (linear, non-linear, dynamic); design of experiments; simulation modeling; game theory; queuing theory; scheduling theory; functional cost analysis, etc. This group can also include Taguchi methods and the Quality Function Deployment (QFD) method.

Signs and variables

Signs and variables are measurable psychological phenomena. Such phenomena can be: the time of solving the problem, the number of mistakes made, the level of anxiety, the indicator of intellectual lability, the intensity of aggressive reactions, the angle of rotation of the body in conversation, the indicator of sociometric status, and many other variables.

The concepts of attribute and variable can be used interchangeably; they are the most common. Sometimes the concepts of an indicator or a level are used instead, for example, the level of persistence, an indicator of verbal intelligence, a high level of intelligence, low indicators of anxiety, etc.

Psychological variables are random variables, since it is not known in advance what value they will take.

The characteristic values ​​are determined using special measurement scales.

Measurement scales. Measurement is the assignment of numerical forms to objects or events according to certain rules. The classification of types of measurement scales is as follows:

Nominative scale (naming scale): objects are grouped into different classes so that within a class they are identical with respect to the measured property.

Ordinal (rank) scale: numbers are assigned to objects depending on the degree to which the measured feature is expressed.

Interval (metric) scale: a measurement in which the numbers reflect not only the differences between objects in the level of manifestation of a property, but also by how much the property is more or less expressed.

A variable is something that can be measured, controlled, or changed in a study. Variables differ in many respects, especially in the role they play in the research, the scale of measurement, etc.

Independent variables are those that are varied by the researcher, while dependent variables are those that are measured or recorded.

A discrete variable is one that can take values only from a certain list of specific numbers. A continuous variable is any variable that is not discrete.

Qualitative data are data that record a certain quality possessed by an object.

Statistical Science Subject

The role and significance of statistics as a science

Statistics is a branch of human activity aimed at collecting, processing and analyzing data of national economic accounting. Statistics itself is one of the types of accounting (alongside bookkeeping and operational-technical accounting).

Statistics first appeared as a science in China in the 5th century BC, when it became necessary to account for state lands, the treasury, the population, etc.; its birth is thus associated with the emergence of the state. Statistics received its further development during the formation of capitalism: plants, factories, agriculture, international trade, etc. Statistics has undergone profound changes both during the years of socialism and at the present time. The basis for the development of statistical techniques and methods was provided by the emergence of the public and private sectors.

The term was introduced into science by the German scholar Gottfried Achenwall, who in 1746 began teaching a new discipline, which he called "statistics", first in Marburg and then at the University of Göttingen.

Statistics studies, in particular:

· mass socio-economic phenomena;

· indicators of commercial activity.

The subject of statistics is the study of social phenomena, dynamics and directions of their development. With the help of statistical indicators, this science determines the quantitative side of a social phenomenon, observes the regularities of the transition from quantity to quality on the example of a given social phenomenon, and on the basis of these observations analyzes the data obtained in certain conditions of place and time. Statistics investigates socio-economic phenomena and processes that are massive in nature, studies many of the factors that determine them.

STATISTICAL METHODS are scientific methods for describing and studying mass phenomena that allow quantitative (numerical) expression.

Statistical methods include both experimental and theoretical principles. Statistics proceeds primarily from experience.

Statistical methods of data analysis are used in almost all areas of human activity. They are used whenever it is necessary to obtain and substantiate any judgments about a group (objects or subjects) with some internal heterogeneity.

It is advisable to distinguish three types of scientific and applied activities in the field of statistical methods for data analysis (according to the degree of specificity of methods associated with immersion in specific problems):

a) the development and study of general-purpose methods, without taking into account the specifics of the field of application;

b) development and research of statistical models of real phenomena and processes in accordance with the needs of a particular field of activity;

c) the use of statistical methods and models for the statistical analysis of specific data.

The collection of various methods forms a statistical methodology.

Methods and stages of economic-statistical research

statistical summary and processing

Yerlan Askarov, Associate Professor KazNTU named after K. Satpayeva


Statistical methods play an important role in the objective assessment of the quantitative and qualitative characteristics of a process and are one of the essential elements of the product quality assurance system and of the quality management process as a whole. It is no coincidence that the founder of the modern theory of quality management, E. Deming, worked for many years at the Census Bureau and dealt precisely with issues of statistical data processing. He attached great importance to statistical methods.

To obtain high-quality products, it is necessary to know the real accuracy of the existing equipment, to determine whether the accuracy of the selected technological process corresponds to the specified accuracy of the product, and to evaluate the stability of the technological process. Problems of this type are solved mainly by the mathematical processing of empirical data obtained by repeated measurements either of the actual dimensions of the products, or of processing errors, or of measurement errors.

There are two categories of errors: systematic and random. As a result of direct observations, measurements or registration of facts, a lot of data is obtained that form a statistical population and need processing, including systematization and classification, calculation of parameters characterizing this population, compilation of tables, graphs illustrating the process.

In practice, a limited number of numerical characteristics, called distribution parameters, are used.

Grouping center. One of the main characteristics of a statistical population, which gives an idea of the center around which all values are grouped, is the arithmetic mean. It is determined from the expression:

Xav = (X1 + X2 + ... + Xn) / n     (1)

where Xi are the measured values and n is the number of observations.

Scattering. The simplest characteristic of scattering is the variation range:

R = Xmax - Xmin

where Xmax, Xmin are the maximum and minimum values of the statistical population.

The variation range is not always characteristic, since it takes into account only the extreme values, which can differ greatly from all other values. The dispersion is described more precisely by indicators that take into account the deviation of all values from the arithmetic mean. The main one of these indicators is the standard deviation of the observation result σ, which is determined by the formula:

σ = √( Σ(Xi - Xav)² / (n - 1) )     (4)
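As a quick illustration of these three characteristics, the following Python sketch computes them for a small hypothetical sample (the numbers are invented for illustration):

```python
import math

# hypothetical sample of measured values (illustrative only)
x = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]
n = len(x)

x_av = sum(x) / n                 # arithmetic mean, formula (1)
r = max(x) - min(x)               # variation range
sigma = math.sqrt(sum((xi - x_av) ** 2 for xi in x) / (n - 1))  # standard deviation, formula (4)

print(f"mean = {x_av:.4f}, range = {r:.4f}, sigma = {sigma:.4f}")
```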

The shape of the probability distribution. To characterize the shape of the distribution, the mathematical model is usually used that best approximates the shape of the probability distribution curve obtained by analyzing the experimentally obtained data.

Normal distribution law. Most random phenomena occurring in life, in particular in production and scientific research, are characterized by the influence of a large number of random factors and are described by the normal distribution law, which is basic in many practical studies. However, the normal distribution is not the only possible one. Depending on the physical nature of the random variables, some of them in practice may have a different type of distribution, for example, lognormal, exponential, Weibull, Simpson, Rayleigh, uniform, etc.

The equation describing the probability density of the normal distribution has the form:

f(x) = 1 / (σ√(2π)) · exp( -(x - μ)² / (2σ²) )     (5)

The normal distribution is characterized by two parameters, μ and σ², and on a graph it is a symmetric Gaussian curve (Figure 1) having a maximum at the point corresponding to the value X = μ (this corresponds to the arithmetic mean Xav and is called the grouping center) and asymptotically approaching the abscissa axis as X → -∞ and X → +∞. The inflection point of the curve lies at a distance σ from the center of location μ. As σ decreases, the curve is stretched along the ordinate axis and compressed along the abscissa. Between the abscissas μ - σ and μ + σ lies 68.3% of the entire area of the normal distribution curve. This means that with a normal distribution, 68.3% of all measured units deviate from the mean by no more than σ, that is, they all lie within the range μ ± σ. The area enclosed between the ordinates drawn at a distance of 2σ on both sides of the center is 95.4%, and, accordingly, the same share of population units lies within μ ± 2σ. Finally, 99.73% of all units lie within μ ± 3σ. This is the so-called "three sigma" rule, characteristic of the normal distribution. According to this rule, no more than 0.27% of all values lie outside the 3σ deviation, that is, 27 realizations per 10 thousand. In technical applications, when evaluating measurement results, it is customary to work with coefficients z applied to σ, corresponding to a 90%, 95%, 99% or 99.9% probability that the result will fall within the tolerance range.


Figure 1

Z90 = 1.65; Z95 = 1.96; Z99 = 2.576; Z999 = 3.291.
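The percentages of the "three sigma" rule and the quoted z coefficients can be checked numerically; a small sketch using scipy (assumed to be available) is shown below.

```python
from scipy.stats import norm

# share of a normal population within ±1σ, ±2σ, ±3σ of the center μ
for k in (1, 2, 3):
    share = norm.cdf(k) - norm.cdf(-k)
    print(f"within ±{k}σ: {share * 100:.2f}%")   # 68.27%, 95.45%, 99.73%

# two-sided z coefficients for 90%, 95%, 99%, 99.9% coverage
levels = {"Z90": 0.90, "Z95": 0.95, "Z99": 0.99, "Z999": 0.999}
for name, p in levels.items():
    z = norm.ppf(0.5 + p / 2)
    print(f"{name} = {z:.3f}")
```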

It should be noted that the same rule applies to deviations of the mean value Xav. It also fluctuates within a certain region, namely within three values of the standard deviation of the mean S in both directions, and this region contains 99.73% of all mean values. The normal distribution manifests itself well with a large number of members of the statistical population, at least 30.

Student's distribution. For practice, it is of great interest to judge the distribution of random variables and to estimate production errors in all manufactured products, as well as errors in scientific experiments, from the results of measuring the parameters of a statistical population obtained from a batch of small volume. This technique was developed by William Gosset in 1908 and published under the pseudonym Student.

The Student's t-distribution is symmetric but more flattened than the normal distribution curve, and therefore elongated at the tails (Figure 2). Each value of n has its own t-function and its own distribution. In the Student's distribution the coefficient z is replaced by the coefficient t, whose value depends on the chosen significance level (which determines what proportion of realizations may lie outside the selected region of the Student distribution curve) and on the number of products in the sample.


Figure 2

For large n the Student's t-distribution asymptotically approaches the standard normal distribution. With an accuracy acceptable for practice, we can assume that for n ≥ 30 the Student's t-distribution (sometimes simply called the t-distribution) is approximated by the normal one.

The t-distribution has the same parameters as the normal one: the arithmetic mean Xav, the standard deviation σ and the standard deviation of the mean S. Xav is determined by formula (1), σ by formula (4), and S by the formula:

S = σ / √n     (6)

Accuracy control. When the distribution of a random variable is known, all the features of a given batch of products can be obtained, the average value, variance, etc. can be determined. But the complete set of statistical data for a batch of industrial products, which means the law of probability distribution, can be known only after the manufacture of the entire batch of products. In practice, the distribution law for the entire set of products is almost always unknown, the only source of information is the sample, usually a small sample. Each numerical characteristic calculated from the sample data, for example, the arithmetic mean or variance, is a realization of a random variable, which can take on different values ​​from sample to sample. The control task is facilitated due to the fact that it is usually not required to know the exact value of the difference between random values ​​and a given value. It is enough just to know whether the observed values ​​differ by more than the amount of the permissible error, which is determined by the value of the tolerance. The extension to the general population of estimates made on the basis of sample data can be carried out only with a certain probability P (t). Thus, a judgment about the properties of the general population is always probabilistic in nature and contains an element of risk. Since the conclusion is made on sample data, that is, with a limited amount of information, errors of the first and second kind may occur.

The probability of making an error of the first kind is called the significance level and is denoted by α. The region corresponding to the probability α is called critical, and the region complementary to it, the probability of falling into which is 1 - α, is called admissible.

The probability of an error of the second kind is denoted by β, and the quantity 1 - β is called the power of the test.

The quantity α is sometimes referred to as the producer's risk, and the quantity β as the consumer's risk.

With probability 1 - α the unknown value X0 of the complete population lies in the interval

(Xav - Zσ) < X0 < (Xav + Zσ) for the normal distribution,

(Xav - tσ) < X0 < (Xav + tσ) for the Student distribution.

The limiting extreme values of X0 are called confidence limits.

With a decrease in the sample size, the confidence limits of the Student distribution expand and the probability of error increases. Setting, for example, a 5% significance level (α = 0.05), it is considered that with a probability of 95% (P = 0.95) the unknown value X0 lies in the interval

(Xav - tσ, ..., Xav + tσ)

In other words, the required accuracy will be equal to Xav ± tσ, and the number of parts with a size outside this tolerance will be no more than 5%.
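Below is a minimal sketch of such an interval estimate for a small hypothetical sample, following the article's Xav ± tσ form; the data values, the sample size and the use of scipy for the Student coefficient are assumptions made for illustration.

```python
import math
from scipy.stats import t as student_t

# hypothetical small sample of shaft diameters, mm (illustrative only)
x = [10.02, 9.97, 10.05, 9.99, 10.01, 9.96, 10.03]
n = len(x)
x_av = sum(x) / n
sigma = math.sqrt(sum((xi - x_av) ** 2 for xi in x) / (n - 1))

alpha = 0.05                                       # 5% significance level
t_coef = student_t.ppf(1 - alpha / 2, df=n - 1)    # two-sided Student coefficient

lower, upper = x_av - t_coef * sigma, x_av + t_coef * sigma
print(f"Xav = {x_av:.3f} mm, t = {t_coef:.3f}")
print(f"with P = 0.95 the values lie within ({lower:.3f}; {upper:.3f}) mm")
```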

Process stability control. In real production conditions, the actual values ​​of the parameters of the technological process and the characteristics of the manufactured products not only change chaotically due to random errors, but often gradually and monotonously deviate from the specified values ​​over time, that is, systematic errors appear. These errors must be eliminated by identifying and eliminating the causes that cause them. The problem is that under real conditions, systematic errors are difficult to distinguish from random ones. Minor systematic errors without special statistical analysis can go unnoticed for a long time against the background of random errors.

The analysis is based on the fact that, when there are no systematic errors, the actual values of the parameters change randomly, while their mean values and basic errors remain unchanged over time. In this case, the technological process is called stable. It is conventionally considered that all products in a given batch are the same. In a stable process, random errors obey the normal distribution law with the center μ = Xo. The average values of the parameters obtained in different batches should be approximately equal to Xo. Consequently, they are all approximately equal to each other, but the current average value Xav.t fluctuates within the confidence interval ± tS, that is:

(Xav - tS) ≤ Xav.t ≤ (Xav + tS)     (7)

The material for the stability analysis can be the same data that were used to control accuracy. But they will be useful only if they represent continuous observations covering a sufficient period of time, or if they are made up of samples taken at regular intervals. The intervals between successive samples are set depending on the observed frequency of equipment disturbances.

At a given significance level, the average value Xav.t in the various current batches can differ from the base value Xav, obtained for the first measurement, by no more than tS, that is,

| Xav - Xav.t | ≤ tS     (8)

If this condition is met, we can assume that the process is stable and that both batches were produced under the same conditions. If the difference between the average values in the two batches exceeds tS, then it can no longer be considered that this difference is caused only by random reasons: a dominant constant factor has appeared in the process, which changes the values of the product parameters in a batch according to some constant law. The process is unstable, and products manufactured at different times will differ significantly from each other, and this difference will increase over time.

Thus, a discrepancy between the mean values in different lots of more than tS indicates the presence of systematic errors and the need to take measures to detect them and eliminate the causes that give rise to them. This principle was applied by W. Shewhart in the development of control charts.

Statistical methods of stability analysis can also be applied in situations opposite to those discussed above. If any changes are made to the design of the product or the technological process of its manufacture, then it is required to determine to what extent this will lead to the expected results.

Consequently, it is required to conduct tests, take several samples and statistically process the data. If

| Xav.old - Xav.new | > tS,     (9)

then the introduced change has significantly affected the process and the expected results can be considered confirmed.
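A sketch of this stability check for two hypothetical batches, following conditions (8) and (9); the batch data and subgroup size are invented for illustration.

```python
import math
from scipy.stats import t as student_t

def mean_and_s(sample):
    """Return the batch mean and the standard deviation of the mean S = sigma / sqrt(n)."""
    n = len(sample)
    m = sum(sample) / n
    sigma = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    return m, sigma / math.sqrt(n)

# hypothetical base batch and current batch (illustrative values)
base    = [10.01, 9.99, 10.02, 9.98, 10.00, 10.01, 9.99, 10.02]
current = [10.04, 10.05, 10.03, 10.06, 10.04, 10.02, 10.05, 10.06]

x_base, s = mean_and_s(base)
x_curr, _ = mean_and_s(current)

alpha = 0.05
t_coef = student_t.ppf(1 - alpha / 2, df=len(base) - 1)

if abs(x_base - x_curr) <= t_coef * s:   # condition (8)
    print("process is stable: the difference is within tS")
else:                                    # condition (9): a systematic shift has appeared
    print("process is unstable: the difference exceeds tS, look for a systematic cause")
```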

Seven Simplest Methods for Statistical Process Research

Modern statistical methods are quite difficult to grasp and to apply widely in practice without in-depth mathematical training of all participants in the process. By 1979, the Union of Japanese Scientists and Engineers (JUSE) had brought together seven fairly easy-to-use visual methods of process analysis. For all their simplicity, they maintain a connection with statistics and give professionals the opportunity to use their results and, if necessary, improve them.

Ishikawa's cause-and-effect diagram. This diagram is a very powerful tool for analyzing a situation and obtaining information about the influence of various factors on the main process. Here it becomes possible not only to identify the factors influencing the process, but also to determine the priority of their influence.


Figure 3

The diagram of type 5M considers such components of quality as "people", "equipment", "material, raw materials", "technology", "management", and in the diagram of type 6M, the component "environment" is added to them (Figure 3).

With regard to the problem of qualimetric analysis being solved:
- for the “people” component, it is necessary to determine the factors associated with the convenience and safety of operations;
- for the “equipment” component - the relationship of the structural elements of the analyzed product with each other, associated with the performance of this operation;
- for the "technology" component - factors related to the performance and accuracy of the operation performed;
- for the component "material" - factors associated with the absence of changes in the properties of materials of the product in the process of performing this operation;
- for the "technology" component - factors associated with reliable recognition of an error in the process of performing an operation;
- for the “environment” component - factors associated with the impact of the environment on the product and the product on the environment.

Types of defects             Control data                      Total
Dents                        ///// ///// ////                    14
Cracks                       ///// ///// ///// //                17
Out of tolerance, minus      ///// //                             7
Out of tolerance, plus       ///// ///// ///// ///// ///         23
Burn during heat treatment   ///// ////                           9
Skewed datum surfaces        ///                                  3
Casting cavities             ///// /                              6
Roughness mismatch           ///// ///// ///// ///               18
Painting defects             ////                                 4
Other                        ///// //                             7
Total                                                           108

Figure 4

Checklists. Checklists can be used both for qualitative and for quantitative control; this document records certain types of defects over a certain period of time. The checklist is good statistical material for further analysis and study of production problems and for reducing the level of defectiveness (Figure 4).

Pareto analysis. Pareto analysis gets its name from the Italian economist Vilfredo Pareto (1848-1923), who showed that most of the capital (80%) is in the hands of a small number of people (20%). Pareto developed logarithmic mathematical models describing this inhomogeneous distribution, and the mathematician M.O. Lorenz provided graphic illustrations, in particular the cumulative curve.

The Pareto rule is a "universal" principle that applies in many situations, and without doubt in solving quality problems. J. Juran noted the "universal" applicability of the Pareto principle to any group of causes producing a particular consequence, with most of the consequences caused by a small number of causes. Pareto analysis ranks individual areas by significance or importance and calls for identifying and eliminating, first of all, those causes that give rise to the greatest number of problems (nonconformities).

Figure 5

Pareto analysis is, as a rule, illustrated by a Pareto chart (Figure 5), on which the causes of quality problems are plotted along the abscissa in descending order of the number of problems they cause, and along the ordinate the problems themselves are plotted in quantitative terms, both as counts and as an accumulated (cumulative) percentage. Let us build such a chart based on the data taken from the previous example, the checklist.

The first action area is clearly visible in the diagram, outlining the causes that are causing the most errors. Thus, in the first place, preventive measures should be aimed at solving precisely these problems. Identifying and eliminating the causes that cause the greatest number of defects allows us to spend a minimum amount of resources (money, time, people, material support) to obtain the maximum effect in the form of a significant reduction in the number of defects.
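The same ranking can be reproduced numerically. The short Python sketch below sorts the defect counts from the checklist (Figure 4) and accumulates the cumulative percentage that a Pareto chart displays.

```python
# defect counts taken from the checklist (Figure 4)
defects = {
    "Out of tolerance, plus": 23, "Roughness mismatch": 18, "Cracks": 17,
    "Dents": 14, "Burn during heat treatment": 9, "Out of tolerance, minus": 7,
    "Other": 7, "Casting cavities": 6, "Painting defects": 4,
    "Skewed datum surfaces": 3,
}

total = sum(defects.values())          # 108 defects in total
cumulative = 0.0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:28s} {count:3d}  {cumulative:6.1f}% cumulative")
```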

Stratification. Basically, stratification is the process of sorting data according to some criteria or variable, the results of which are often shown in the form of charts and graphs. We can classify an array of data into different groups (or categories) with general characteristics called the stratification variable. It is important to establish which variables will be used for sorting. Stratification is the basis for other tools such as Pareto analysis or scatterplots. This combination of tools makes them more powerful.

Let us take the data from the checklist (Figure 4). Figure 6 shows an example of a defect-source analysis. All 108 defects (100%) were classified into three categories: by shift, by worker and by operation. From the analysis of the data presented, it is clearly seen that the largest contribution to the defects is made by shift 2 (54%) and by worker G (47%), who works in this shift.
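A minimal sketch of such a stratification in Python: individual defect records are grouped by a chosen stratification variable. The records themselves are invented for illustration, since the text only reports the resulting shares.

```python
from collections import Counter

# hypothetical defect records: (shift, worker) for each registered defect
records = [("shift 1", "A"), ("shift 2", "G"), ("shift 2", "G"),
           ("shift 2", "B"), ("shift 1", "C"), ("shift 2", "G"),
           ("shift 3", "D"), ("shift 2", "G"), ("shift 1", "A")]

by_shift  = Counter(shift for shift, _ in records)   # stratify by shift
by_worker = Counter(worker for _, worker in records) # stratify by worker

total = len(records)
for shift, count in by_shift.most_common():
    print(f"{shift}: {count} defects ({100 * count / total:.0f}%)")
for worker, count in by_worker.most_common():
    print(f"worker {worker}: {count} defects ({100 * count / total:.0f}%)")
```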

Histograms. Histograms are one of the options for a bar chart that displays the dependence of the frequency of product or process quality parameters falling within a certain interval of values ​​on these values.

Below is an example of plotting a histogram.

For convenience of calculation and construction, we use the EXCEL spreadsheet package. It is necessary to determine the spread of a geometric dimension, for example, the diameter of a shaft whose nominal size is 10 mm. Twenty shafts were measured; the measurement data are given in the first column A (Figure 7). In column B we arrange the measurements in ascending order, then in cell D7 we determine the size range as the difference between the largest and smallest measured values. We select the number of histogram intervals equal to 8 and determine the interval width D = R / 8. Then we determine the parameters of the intervals, that is, the smallest and largest value of the geometric parameter included in each interval:

Xi,min = Xmin + (i - 1)·D,  Xi,max = Xmin + i·D,

where i is the number of the interval.

After that, we determine the number of hits of the parameter values ​​in each of the 8 intervals, after which we finally build the histogram.


Figure 7
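The same construction can be reproduced without a spreadsheet. In the Python sketch below, the 20 diameter values are invented for illustration; only the nominal size of 10 mm and the choice of 8 intervals follow the example.

```python
import numpy as np

# 20 hypothetical measured shaft diameters, mm (illustrative only)
d = np.array([10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02, 10.04, 9.96,
              10.01, 10.00, 9.98, 10.02, 10.03, 9.99, 10.01, 10.00, 9.97, 10.02])

k = 8                                    # chosen number of histogram intervals
r = d.max() - d.min()                    # size range
width = r / k                            # interval width D

counts, edges = np.histogram(d, bins=k)  # number of hits in each interval
for i, c in enumerate(counts, start=1):
    print(f"interval {i}: [{edges[i-1]:.3f}; {edges[i]:.3f}) -> {c} values")
```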

Scatter plots. Scatter plots are graphs that make it possible to reveal the correlation (statistical dependence) between various factors affecting quality indicators. The diagram is plotted along two coordinate axes: the value of the varied parameter is plotted along the abscissa, and the value of the investigated parameter obtained at that value of the varied parameter is plotted along the ordinate; at the intersection of these values we put a point. Having collected a sufficiently large number of such points, we can carry out an analysis and draw conclusions.

Let us give an example. The company decided to conduct classes on the basics of quality management. A certain number of workers were trained every month: 2 people in January, 3 in February, and so on. Over the year the number of trained workers increased and reached 40 by the end of the year. Management instructed the quality service to track how the percentage of defect-free products presented on first submission, the number of customer complaints received by the plant, and the energy consumption in the workshop depend on the number of trained workers. A table of data by month was compiled (Table 1) and scatter diagrams were plotted (Figures 8, 9, 10). They clearly show that the percentage of defect-free products increases (a direct correlation) and the number of complaints decreases (an inverse correlation); in both cases the diagrams show a clearly pronounced correlation, determined by how closely the points cluster around a well-defined trajectory, in our case a straight line. The amount of consumed electricity does not depend on the number of trained workers.
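How such dependences can be quantified is sketched below with Pearson correlation coefficients; the monthly figures are invented placeholders for Table 1, which is not reproduced in the text.

```python
import numpy as np

# hypothetical monthly data (illustrative placeholders for Table 1)
trained     = np.array([2, 5, 9, 13, 17, 21, 25, 28, 31, 34, 37, 40])
defect_free = np.array([88, 89, 91, 92, 93, 94, 95, 95, 96, 97, 97, 98])   # %
complaints  = np.array([12, 11, 10, 10, 9, 8, 7, 7, 6, 5, 5, 4])
energy_kwh  = np.array([500, 510, 495, 505, 498, 502, 507, 499, 503, 501, 496, 504])

for name, series in [("defect-free %", defect_free),
                     ("complaints", complaints),
                     ("energy", energy_kwh)]:
    r = np.corrcoef(trained, series)[0, 1]   # Pearson correlation coefficient
    print(f"correlation(trained, {name}) = {r:+.2f}")
```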

Control charts. Control charts are a special type of diagram first proposed by W. Shewhart in 1924. They reflect the nature of the change in a quality indicator over time, for example, the stability with which a product dimension is obtained. In essence, control charts show the stability of the technological process, that is, whether the average value of a parameter stays within a corridor of acceptable values formed by the upper and lower tolerance limits. The data from these charts can signal that the parameter is approaching a tolerance limit and that proactive action must be taken even before the parameter enters the scrap zone; that is, this control method makes it possible to prevent the appearance of scrap at the stage of its inception.

There are 7 main types of control charts:

    x–S chart: fluctuations of the mean and of the standard deviation of the mean,

    x–R chart: fluctuations of the mean and of the range,

    x chart: fluctuations of individual values,

    c chart: fluctuations in the number of defects,

    u chart: fluctuations in the number of defects per unit of product,

    pn chart: fluctuations in the number of defective units of product,

    p chart: fluctuations in the proportion of defective products.

All charts can be divided into two groups. The first controls quantitative quality parameters, which are continuous random variables: dimensions, mass, etc. The second is for the control of qualitative, alternative, discrete parameters (there is a defect / there is no defect).

Table 2



Consider, for example, the x–S chart. It tracks fluctuations of the arithmetic mean; the tolerance band here is 3S (for the normal distribution) or tS (for the Student distribution), where S is the standard deviation of the mean. The middle of the corridor is the arithmetic mean of the first measurement. The values of this chart are the most reliable and objective. The general form of a control chart is shown in Figure 11.
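A simplified sketch of how the corridor of such an x–S chart can be computed from the first (base) sample, following the "mean ± 3S" description above rather than the standard SPC constants; the measurements are illustrative.

```python
import math

# base sample used to set up the control corridor (illustrative values)
base = [10.01, 9.99, 10.02, 9.98, 10.00, 10.01, 9.99, 10.02]
n = len(base)

x_center = sum(base) / n                                      # center line
sigma = math.sqrt(sum((x - x_center) ** 2 for x in base) / (n - 1))
s_mean = sigma / math.sqrt(n)                                 # standard deviation of the mean S

ucl = x_center + 3 * s_mean      # upper control limit
lcl = x_center - 3 * s_mean      # lower control limit
print(f"center = {x_center:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")

# each new subgroup mean is then compared with the corridor
new_subgroup_mean = 10.02
print("within corridor" if lcl <= new_subgroup_mean <= ucl
      else "signal: investigate the process")
```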

Literature:

1. Askarov E.S. Quality control: a textbook. 2nd edition. Almaty: Pro Servis, 2007. 256 p.


Statistical methods are described in sufficient detail in the domestic literature. In the practice of Russian enterprises, however, only a few of them are used. Let us further consider some methods of statistical processing.

General information

In the practice of domestic enterprises, predominantly statistical methods of inspection (control) are used. Statistical regulation of the technological process, by contrast, is encountered extremely rarely. The application of statistical methods presupposes that a group of appropriately qualified specialists is formed at the enterprise.

Meaning

According to the requirements of the ISO 9000 series, the supplier must determine the need for statistical methods applied in the development and regulation of the production process and in the verification of process and product characteristics. The techniques used are based on probability theory and mathematical statistics. Statistical methods of data analysis can be implemented at any stage of the product life cycle. They provide an assessment of, and account for, the degree of heterogeneity of the product or the variability of its properties relative to the established nominal or required values, as well as the variability of the process of its creation. Statistical methods are methods by which it is possible, with a given accuracy and reliability, to judge the state of the phenomena being investigated. They make it possible to predict certain problems and to develop optimal decisions on the basis of the studied factual information, trends and patterns.

Directions of use

The main areas in which statistical methods are widely used are the following:


Practice of developed countries

Statistical methods are the basis for ensuring the creation of products with high consumer characteristics. These techniques are widely used in industrially developed countries. Statistical methods are, in fact, guarantors that consumers receive products meeting the established requirements. The effect of their use has been proven by the practice of industrial enterprises in Japan; it was these methods that contributed to the achievement of the highest production level in that country. Many years of foreign experience show how effective these techniques are. In particular, it is known that Hewlett-Packard, using statistical methods, was able in one case to reduce the number of defects per month from 9,000 to 45.

Difficulties of implementation

In domestic practice, there are a number of obstacles that prevent the use of statistical methods of studying indicators. Difficulties arise due to:


Program development

It must be said that determining the need for particular statistical methods in the field of quality, and choosing and mastering specific techniques, is rather difficult and lengthy work for any domestic enterprise. For its effective implementation, it is advisable to develop a special long-term program. The program should provide for the formation of a service whose tasks will include the organization and methodological guidance of the application of statistical methods. Within the framework of the program, it is necessary to provide equipment with appropriate technical means, to train specialists, and to determine the composition of the production tasks that should be solved using the selected techniques. It is recommended to start mastering the simplest approaches, for example the elementary tools already known in production. Subsequently, it is advisable to move on to other techniques, for example analysis of variance, sampling-based processing of information, regulation of processes, planning of factorial research and experiments, etc.

Classification

Statistical methods of economic analysis include a variety of techniques, and there are quite a few of them. However, K. Ishikawa, a leading expert in the field of quality management in Japan, recommended using seven basic methods:

  1. Pareto charts.
  2. Grouping of information according to common characteristics.
  3. Control charts.
  4. Causal diagrams.
  5. Histograms.
  6. Checklists.
  7. Scatter plots.

Based on his own experience in the field of management, Ishikawa claims that 95% of all issues and problems in the enterprise can be solved using these seven approaches.

Pareto chart

This chart is based on a certain ratio, which has been called the "Pareto principle": 20% of the causes give rise to 80% of the effects. The Pareto chart shows, in a visual and understandable form, the relative influence of each circumstance on the overall problem in descending order. This influence can be investigated in terms of the number of losses or defects provoked by each cause. The relative influence is illustrated with bars, and the accumulated influence of the factors with a cumulative line.

Causal diagram

On it, the problem under study is conventionally depicted in the form of a horizontal straight arrow, and the conditions and factors that indirectly or directly affect it - in the form of oblique ones. When constructing, one should take into account even seemingly insignificant circumstances. This is due to the fact that, in practice, there are quite often cases in which the solution of the problem is ensured by excluding several seemingly insignificant factors. The reasons that affect the main circumstances (of the first and subsequent orders) are depicted on the diagram with horizontal short arrows. The detailed diagram will be in the shape of a fish skeleton.

Grouping information

This economic-statistical method is used to order the set of indicators that were obtained when evaluating and measuring one or more parameters of an object. Typically, this information is presented in the form of an unordered sequence of values. These can be the linear dimensions of the workpiece, the melting point, the hardness of the material, the number of defects, and so on. On the basis of such a system, it is difficult to draw conclusions about the properties of the product or the processes of its creation. The ordering is done using line graphs. They clearly show the changes in the observed parameters over a certain period.

Checklist

As a rule, it is presented in the form of a table of frequency distribution of the occurrence of the measured values ​​of the parameters of the object in the corresponding intervals. Checklists are drawn up depending on the goal of the study. The range of values ​​of the indicators is divided into equal intervals. Their number is usually chosen equal to the square root of the number of measurements performed. The form should be simple to avoid problems when filling out, reading, checking.

Histogram

It is presented in the form of a stepped polygon. It clearly illustrates the distribution of measurement values. The range of established values ​​is divided into equal intervals, which are plotted along the abscissa axis. A rectangle is drawn for each interval. Its height is equal to the frequency of occurrence of the value in the given interval.

Scatter plots

They are used to test a hypothesis about the relationship between two variables. The model is constructed as follows: the value of one parameter is plotted along the abscissa axis, and the value of the other indicator along the ordinate. As a result, a point appears on the chart. These steps are repeated for all values of the variables. If a relationship exists, the correlation field is elongated and its direction does not coincide with the direction of either axis. If there is no relationship, the field is parallel to one of the axes or has the shape of a circle.

Control charts

They are used to evaluate a process over a specific period. Formation of control charts is based on the following provisions:

  1. All processes deviate from the specified parameters over time.
  2. An unstable process does not deviate by chance: deviations that go beyond the expected limits are non-random.
  3. Individual changes cannot be predicted.
  4. A stable process may deviate randomly, but within the expected boundaries.

Use in the practice of Russian enterprises

It should be said that domestic and foreign experience shows that the most effective statistical method for assessing the stability and accuracy of equipment and technological processes is the construction of control charts. This method is also used in regulating production processes. When constructing charts, it is necessary to correctly select the parameter under study. Preference should be given to indicators that are directly related to the purpose of the product, can be easily measured, and can be influenced by regulation of the process. If such a choice is difficult or not justified, one can evaluate quantities correlated (interconnected) with the controlled parameter.

Nuances

If it is economically or technically impossible to measure indicators with the accuracy required for charting by a quantitative criterion, an alternative indicator is used. Associated with it are the terms "defect" and "defective item (scrap)". A defect is each separate non-conformity of the product with the established requirements; a defective item is a product that may not be delivered to consumers because of the presence of defects in it.

Peculiarities

Each type of chart has its own specifics, which must be taken into account when choosing charts for a particular case. Charts based on quantitative criteria are considered more sensitive to process changes than those using an alternative attribute, but they are more laborious. They are used for:

  1. Debugging the process.
  2. Assessing the capabilities of the technological process.
  3. Checking the accuracy of the equipment.
  4. Determining tolerances.
  5. Comparing several acceptable ways of manufacturing a product.

Additionally

If a process disturbance manifests itself as a shift of the monitored parameter, X charts should be used. If it manifests itself as an increase in the scatter of values, the R or S chart should be chosen. However, a number of peculiarities must be taken into account. In particular, S charts make it possible to detect process disorder more accurately and quickly than R charts with the same subgroup size, although the construction of R charts does not require complex calculations.

Conclusion

In economics, it is possible to investigate the factors revealed in the course of qualitative assessment, both in space and in dynamics, and to perform predictive calculations with their help. However, statistical methods of economic analysis do not include methods for assessing the cause-and-effect relationships of economic processes and events, or for identifying promising and unused reserves for increasing the effectiveness of activities; in other words, factorial techniques are not included among the approaches considered.