Thursday, August 28, 2014

Evaluating Quality Control

I received an email recently that asked me how I evaluate quality control. This is a very, very hard thing to do, but here is how I approach it when it is possible.

First, everything produced by man, whether it is something as complex as a state of the art luxury car or as simple as a brown paper bag from the grocery store, is subject to manufacturing errors. A single lemon does not equal bad quality control. People on forums seem to disagree with this sometimes, using a difficult-to-refute performative proof logic--if I got a defect, they must not have good QC. But the truth of the matter is that no matter the skill or the scale of the producer, errors will occur, and a single error or a few errors (depending on the scale of production) do not indicate poor quality control. In industrial production, Lego is often cited in books on management and business as having the best QC around. They produce literally billions of small items, all of which must be precisely made in order to work, and many of which have to be bundled together in specific ways to make the final product. They have to do this quickly and efficiently to make sure they can turn a profit selling these tiny things (relatively) cheaply. Despite this, Lego's error rate is regularly cited as 13 manufacturing errors per 1 million bricks made. Think about that for a second. 13 out of 1,000,000. That is below the error rate accepted for convictions in criminal cases ("beyond a reasonable doubt" is often explained as 99% sure, or 1 error in 100; Lego's error rate works out to .0013 errors per 100, a vastly smaller rate of error). An error rate of 0 is not possible in any endeavour performed by humans or machines made by humans, so the idea that one bad knife or light equals poor QC is absurd, despite the seeming power of the logic used.
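To put the two error rates on the same scale, here is the arithmetic as a quick sketch. The 13-per-million figure is the commonly repeated number cited above, not an official Lego statistic:

```python
# Comparing two error rates on a per-100 basis.
# Figures as cited in the post; the Lego number is the commonly
# repeated 13-errors-per-million claim, not an official statistic.

lego_rate = 13 / 1_000_000   # manufacturing errors per brick
doubt_rate = 1 / 100         # "beyond a reasonable doubt" ~ 1 error in 100

print(f"Lego errors per 100 bricks: {lego_rate * 100:.4f}")        # 0.0013
print(f"The legal threshold allows {doubt_rate / lego_rate:.0f}x more error")
```

In other words, a conviction standard often glossed as "99% sure" tolerates roughly 769 times more error than Lego's cited defect rate.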

So how do I evaluate QC problems or design flaws? It's not easy, and I have to do it indirectly most of the time, but here is how I do it.

In some instances, it's clear from the number and the source of reported problems that there is a QC issue. The most recent example I can think of is the Elmax steel controversy. Whatever you think about Cliff Stamp, it is pretty obvious that he is really methodical when it comes to his blades. The man keeps journals about sharpening angles for given knives. He hunts down and consolidates CATRA numbers for steels. He is a polarizing figure, but he is a good source of information. He initially pointed out that ZT's heat treat on the first run of Elmax blades left them prone to rolling and dulling. I noted this in my review of the ZT0560. He put it out there, and then not just one or two people agreed (you can find agreement between one or two people on the internet regarding just about anything)--dozens of people agreed and showed pictures of problems. This is the first form of QC evaluation: many good sources complaining about the same problem.

The second way I evaluate QC is by tracing design improvements and changes. I noted in my review of the Strider PT CC that the lock face geometry changed and that the pivot design changed. Both of these things indicated a problem with previous designs. This sort of iterative upgrading is common in the knife and light world. When changes occur that aren't "materials upgrades," like better steel or a new emitter, it can (though not always) point to problems with the original production models. Spyderco does this all of the time--the molded clips on the Delica, Endura, and Dragonfly all gave way to steel clips in their iterative upgrade process. They even have a name for it: Constant Quality Improvement. This behavior, displayed by both Strider and Spyderco, is a sign there were problems with the original, but it is also a sign of a superior maker. Everything can be made better, and the fact that these two companies are always making things better tells you a good deal about why they are so well respected in the gear community (for their knives, that is).

The third way I evaluate QC is probably the easiest--recalls. Few of the companies that make gear we are interested in have had products subject to recall, but some have. Gerber, for instance, has had many product recalls. The Instant was recalled within a year of its very high profile launch because the button lock failed at inopportune times. Their parang blades would break off. And there are others. The reality is that this many recalls, spread out over many different designs, indicates a problem with QC, and given the scope, a company-wide one. Fixed blades snapping in two is not like "my clip broke off" or "my frame lock has blade play." This indicates a serious lapse in QC and is one of the reasons I don't really bother to review Gerber gear and regularly bash the company.

Fourth, and rarest of all, is direct company input. I have been fortunate enough to know lots of folks that know way more than I do about gear production, and every once in a while I will learn about problems with OEMs or other things of the sort. It hasn't happened with a piece of gear I have reviewed, but if it does, you'll know.

One flawed version of something is not a QC issue, but it might be indicative of one. It's hard to evaluate because of my distinct lack of sample size (usually only a single piece). But in some instances, when I have had multiple pieces (like I have for a review I am working on right now), I feel comfortable saying my experience is indicative of poor QC. That is VERY rare. Lemons occur everywhere, even in custom lights and knives. Evaluating QC requires you to focus not on a single piece but on the production run as a whole, and generally that's difficult.


  1. It's awesome that you're discussing this, aiming to nail down an explicit approach to evaluating and reporting on QC.

    I'd really like to hear your thoughts about the knee-jerk routine periodically seen on BF (and some other knife fora) when someone posts about quality defects in the knife they just bought. A lot of knife forumites actually seem to think it's somehow bad form to notify the community of a maker's QC problems, unless one has first gone through a full warranty claim and resolution.

    IMO that's bizarre. The knife community is broken to the extent it accepts such an anti-consumer discourse norm.

    Indeed, the whole manufacturer-fluffing vibe on some knife forums would be openly disdained in other gear enthusiast cultures. Most of the generalist gun forums I've belonged to would respond with contempt if a member tried to jump on a disappointed owner of a jacked up pistol with the "H8r! / Agenda! / How dare you post when you didn't UPS it in and wait 4 weeks first!" routine.

    Sure, warranty service is nice, but it also matters HUGELY whether companies are getting it right out of the box. I don't want to have to send the d*mn knife in for warranty. Posts about observed defects are how we can learn about the larger patterns of QC problems you mention in the post.

    I don't think it is economically unrealistic to expect knife companies to do some pre-shipping QC and weed out gross failures when it's a matter of a single $100 knife as opposed to 500 Lego blocks at $0.04 per or whatever.

    And many aspects of knife forums need reform.

    1. Great comment. I don't dwell on fora that much, but I too have noticed that in every "discussion" (if you can call it that) about a broken or seriously flawed knife or multitool, the first comments are usually "have you sent it in (at your own cost, in both money and time) for warranty?" and/or straightforward bashing of the unfortunate customer.

      I believe that is because of the absurdly high prices of knives and multitools. These prices have caused most gear geeks to find ways to justify their purchases that are nothing but defense mechanisms, and there you have it - Stockholm syndrome, love for overcharging companies.

      Personally, it took me 3 Spyderco lockbacks to conclude they are shit at it. Up-and-down blade play makes a huge difference when cutting through anything thicker than a piece of paper, and makes a spartan Opinel carbon feel as sturdy as a woodcutter's axe compared to said knives. And that is a design flaw more than QC - a real QC problem is even harder to accept, which makes bashing whoever makes such wild accusations even more tempting.

    2. I'm glad someone said it about Spyderco lockbacks. It really is unacceptable. A DF2 is about $40 and the ZDP version is $70. Why are we accepting blade play at those prices? My DF2 has an obscene amount of vertical blade play.

  2. Then there is the Kershaw Cryo.... (O_o)

  3. Both my Ladybugs (Salt and gray VG-10 versions) also have an obscene amount of blade play. The Salt version (and no, I didn't get a lemon - all the Salt Ladybugs at my importer had that defect) wasn't even riveted right - I had to hammer it to keep the blade in place. The gray version is downright bad: screw construction and so much blade play you can literally feel the pivot moving up and down. It's a shame, as it renders their otherwise capable hollow grinds useless. And that's a $50 knife.
    The Delica I owned also had vertical blade play, but the blade shape and steel compensated for that, at least to some extent.
    Oddly, the all-steel Robyn2 had only horizontal blade play - and the lock bar was wider than the blade on that one.

  4. The Waved Delica I had developed a lot of blade play over time. The lock back also failed, even though I regularly cleaned and lubed it. A simple spine-whack test caused the blade to disengage from the lock. The knife was not used hard and was mostly for opening boxes and very simple cutting chores. Unacceptable.

  5. As a QC tech for a defense company, my answer to QC is "it depends"...on a variety of "what ifs"...
    When testing bolts for hardness, we did an initial sample of the lot...if the sample passed, we moved the lot on to manufacturing. If the test sample failed, we did further testing on a larger sample. If that passed, the lot was moved on after supervisor involvement/approval.

    I'm curious how the Elmax production blades were tested...not for hardness, which is very easy using Rockwell hardness test gear...but for edge rolling and dulling. What are the test criteria, what are the optimal edge angles, and how much "use" is considered "normal" for dulling and rolling of an edge to occur?

    I'd guess that their QC is for hardness after heat treat, before further manufacturing of the blades. I'd also guess that their QC is mainly "visual inspection"--looking for flaws in the polish, obviously uneven edges, etc. But I have no idea how their QC criteria work.
    It also depends on what is considered "acceptable" for the normal use of the item...a knife blade in this case. I'd guess visual flaws have a higher priority than hardness, etc., as the average person is never going to put the knife to a use where material flaws will show up, or those flaws may be attributed to the user by the user and considered normal.
    It is the very small minority of "knife nuts" like us that consider them important.

    Nice "Commentary"--I enjoyed it quite a bit.
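The two-stage acceptance sampling the QC tech describes in comment 5 can be sketched in code. Everything concrete here (the hardness spec limits, sample sizes, and the all-must-pass rule) is an illustrative assumption, not the company's actual procedure:

```python
import random

# Illustrative two-stage acceptance sampling, loosely following the
# commenter's description. Spec limits, sample sizes, and the
# all-must-pass criterion are made-up assumptions, not real values.

def bolt_passes(hardness, spec_min=38.0, spec_max=45.0):
    """Pass/fail hardness check (Rockwell C; limits are illustrative)."""
    return spec_min <= hardness <= spec_max

def inspect_lot(lot, first_sample=8, second_sample=32, seed=0):
    """Stage 1: test a small random sample; all pass -> accept the lot.
    Stage 2: on any failure, test a larger sample; if that all passes,
    the lot moves on, but only with supervisor approval. Else reject."""
    rng = random.Random(seed)
    stage1 = rng.sample(lot, min(first_sample, len(lot)))
    if all(bolt_passes(b) for b in stage1):
        return "accept"
    stage2 = rng.sample(lot, min(second_sample, len(lot)))
    if all(bolt_passes(b) for b in stage2):
        return "accept with supervisor approval"
    return "reject"

print(inspect_lot([40.0] * 100))  # accept
print(inspect_lot([50.0] * 100))  # reject
```

The appeal of this scheme is exactly what the comment implies: most lots only cost you a small sample's worth of testing, and the expensive larger sample (plus human sign-off) is reserved for lots that already look suspect.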