I cannot imagine there are too many people in my shoes. I have been to many classes over the years, almost all of them documented here on the blog. I have met a lot of people in those classes: accountants, police officers, information technology specialists, lawyers, doctors, army reservists, etc. I cannot ever recall meeting a fellow active teacher in a class (I do recall meeting Paul Carlson of Safety Solutions Academy, a former middle school teacher, in this class). There are others at the instructor end of things who have backgrounds in education (Lee Weems and Dr. Sherman House, for example). And, of course, we are fans of Professor David Yamane of Gun Culture 2.0 (and Gun Curious, and Light Over Heat… he’s so prolific!), an educator who trains fairly regularly.
This lack of professional teachers taking firearms classes is not necessarily a surprise. I have mentioned before (see here) how the teaching profession, in my experience, tends to be populated by those who are at best ambivalent about firearms and self-defense, and many abhor firearms and regard self-defense of any sort as borderline criminal. Indeed, in Paul Carlson’s recent appearance on That Weems Guy podcast with Lee Weems, he mentioned how much of a fish out of water he was being a firearms enthusiast while teaching in public schools. Accordingly, it would certainly be strange to go to a firearms/self-defense class and meet a bunch of students who chose teaching as their vocation.
Being a teacher by trade—and a rare bird in such classes—has put me in a rather unique position. I think it is safe to say that I am one of the few people with a Master of Science in Education (from a top-notch institute of higher learning, if I may be so bold) who has taken such a quantity of classes. Coupled with my educational background is my own professional background, with over twenty years of teaching a variety of subjects to challenging students in one of the more dysfunctional cities in the country.
I started down this training path with full vigor in the spring of 2013. Thus, I am currently entering my tenth year of training. To date, this has added up to nearly 760 hours of training, almost all of which has been done on my own dime. No sponsorships, no free ammunition or travel expenses. I have gotten the occasional class gratis or discounted, but the VAST majority has been paid for with my own hard-earned money and irreplaceable time.
Because of the combination of my professional background, knowledge, and experience, and the fact that I am not otherwise tethered to any “interest” in the firearms and self-defense world (sponsorships or other monetary connections), I thought it would be interesting to apply my professional knowledge and experience to evaluate the instructors and classes I have experienced firsthand.
I began by creating a rubric of sorts to apply to the instructors in order to grade them. I decided to evaluate each instructor in five different categories: Knowledge, Instructional Delivery, Professionalism, Organization, and Flexibility. Each of these areas was further divided into five subcategories. For example, under “Knowledge”, one of the subcategories was: “Is what the instructor presents data-driven?” Under “Professionalism” I included: “Attention to Safety.” Under “Organization”, I included: “Time Management” and “Was there contact with the student prior to class for logistics?” For each of these subcategories, the instructor could earn 2, 1, or 0 points depending on his performance in this area. Thus, with five categories each with five subcategories, and a maximum value of 2 points for each subcategory, the highest possible score would be 50 points.
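For the more programmatically minded, the arithmetic of the rubric can be sketched in a few lines of Python. This is purely illustrative: the category names come from the post, but the particular data layout and the example scores are my own assumptions, not the actual scoring sheet.

```python
# Illustrative sketch of the 50-point rubric: 5 categories x 5 subcategories,
# each subcategory earning 0, 1, or 2 points.

CATEGORIES = ["Knowledge", "Instructional Delivery", "Professionalism",
              "Organization", "Flexibility"]

def rubric_total(scores: dict[str, list[int]]) -> int:
    """Sum all subcategory scores after checking the rubric's shape."""
    total = 0
    for category in CATEGORIES:
        subscores = scores[category]
        assert len(subscores) == 5, "each category has five subcategories"
        assert all(s in (0, 1, 2) for s in subscores), "each subcategory earns 0, 1, or 2"
        total += sum(subscores)
    return total

# A hypothetical instructor with full marks in every subcategory scores the maximum.
perfect = {c: [2, 2, 2, 2, 2] for c in CATEGORIES}
print(rubric_total(perfect))  # 50
```

Nothing fancy, but it makes the ceiling explicit: 5 categories × 5 subcategories × 2 points = 50.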
I did my best to apply lessons from my educational background (i.e., how would my principal evaluate me? How did my university supervisor evaluate me when I was in graduate school?). Some aspects did not apply, or were at least difficult to evaluate given the subject matter, the venues, the time allotted, etc. But I tried to make the rubric as unbiased as possible. My goal was certainly not to set up an evaluation tool to make sure that my “favorite” instructors did well while my least favorite did not. Of course, the better way to have done this would have been to create the tool and then take 750 hours of classes. Alas, the idea to do such an evaluation did not come to me until recently. Having said that, I was kind of surprised that some of my “favorite” instructors did not fare as well in my rubric as some others.
The rubric was created by me and me alone. I am human. I make mistakes. Despite my efforts to avoid biases, they may still be present. There is no guarantee that this rubric is the best way to evaluate the instructors. Indeed, I must confess that I am not entirely satisfied with the rubric, and may rework it at some point and then score everyone again.
I must also confess that, at times, I was fighting with my own memory about classes that, as noted, in some cases took place eight or nine years ago. Believe it or not, I do not remember everything, and my notes and AARs do not always have every detail. I did the best I could with what I had.
Some instructors had some distinct advantages. For example, there were some instructors from whom I took several courses. Thus, they had several chances to score higher than an instructor from whom I took fewer courses. One way I tried to mitigate that advantage was to grant “partial credit”. For example, if an instructor demonstrated drills or techniques in one class but failed to do so in another, then that instructor would earn 1 point in that area rather than 2. Not a perfect solution, but it helped soften the advantage an instructor might otherwise get.
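The partial-credit rule described above can also be sketched in code. The exact collapsing logic is my assumption for illustration (the post only gives the one drills example), but the idea is: an instructor observed across several classes gets full credit for a subcategory only if it was met every time, and a single point if it was met in some classes but not others.

```python
# Illustrative partial-credit rule for an instructor seen across multiple classes.
# Each element of per_class_scores is that subcategory's score (0-2) in one class.
# The thresholds here are an assumption, not the author's documented method.

def partial_credit(per_class_scores: list[int]) -> int:
    """Collapse one subcategory's per-class scores into a single 0/1/2 score."""
    if all(s == 2 for s in per_class_scores):
        return 2  # consistently demonstrated in every class
    if all(s == 0 for s in per_class_scores):
        return 0  # never demonstrated
    return 1      # demonstrated in some classes but not others

# E.g., demonstrating drills in one class but not another earns 1 point, not 2.
print(partial_credit([2, 0]))     # 1
print(partial_credit([2, 2, 2]))  # 2
```

As the post notes, this is not a perfect fix, but it keeps an instructor from racking up extra points simply by being observed more often.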
I am not going to divulge the scores of every instructor here. However, a few notes.
First, almost all of the instructors with whom I have trained scored well. Exactly two-thirds of the instructors scored 40 points or higher, and only one instructor/company scored less than 30 points. Why? I would like to think that I did a decent job of vetting instructors before ever registering for their classes. I read others’ reviews, listened to the instructors on podcast appearances (it was one such appearance by Greg Ellifritz on Ballistic Radio that convinced me he was someone I wanted to train with), and watched them on YouTube or other video platforms (it was a YouTube video that convinced me that Mike Pannone was someone I wanted to train with). In short, I was rarely disappointed by an instructor/class, and even on those occasions when a class did not completely “deliver”, it was never a wholesale disappointment.
Three instructors tied for the top score: Paul Howe of Combat Shooting and Tactics, Chuck Haggard of Agile Training and Consulting, and Kerry Davis of Dark Angel Medical. There was a five-way tie for fourth place: Greg Ellifritz of Active Response Training, Tim Chandler and Ashton Ray of 360 Performance Shooting, Will Petty of Centrifuge Training, Craig Douglas of Shivworks, and Joe Weyer of Weyer Tactical and Alliance Police Training. Rounding out this upper tier of instructors, tied for ninth place, were Tom Givens of Rangemaster, Mike Green and his Green Ops crew, and John Murphy of FPF Training.
Obviously, they did the best because they scored the highest on the rubric. But were there any innate, non-quantifiable attributes that I think helped them do better? As I look at the list of names, I would start by saying that these are all smart people! I would be just as happy to speak about history, or movies, or world events, or sports, or cars, with any of these people as I would about things “tactical”. Secondly, each of them goes beyond being “an instructor”. As a teacher, I feel honored to bestow upon each of them the title of “teacher”. All of them have honed their craft to a high degree (and, it is obvious, continue to work to develop even further). They utilize a variety of teaching modalities. They demonstrate drills. They take the time to address student questions and concerns. They give incredible amounts of attention to safety. In short, they are professionals.
Readers may wonder who scored poorly. The sole scorer in the 20-point range was one of the instructors from Suarez International (Jack Rumbaugh). Though I never trained with Gabe Suarez himself, I found one of his minions more or less adequate, but I found Jack (who, at the time, was the head of training for Suarez International) to be laughably bad. A combination of a lack of professionalism (which included missing flagrant safety issues and wasting class time by sharing stories of wooing flight attendants), poor time management, and questionable curricular goals and objectives created little mystery about who would score the worst. So it goes.
What else? As noted, a few of my “favorite” instructors who I would highly recommend did not score in this upper tier according to my rubric. What does this mean? To me it means that quality instruction is more than just the sum of a bunch of data points. While there is a science to teaching well, and there are certainly (dare I bust out the jargon here?) “research-based best practices”, there is also an art to teaching well. Just as there are some people who are more athletic or more intelligent than others, so I believe that some people are born to teach. For these rare few, it does not matter much whether they incorporate PowerPoint or manipulatives or lecture or any other teaching technique. They are just good. Probably the best way to measure how good they are is to track the progress of their students rather than whether or not they present material through multiple modalities.
In the end, this endeavor was meant more as an interesting thought exercise than anything else. It’s not like I am going to take future classes, run the instructor/class through my rubric, and make the score the determining factor on whether or not I think the class was “good” or recommend that instructor to others. Consider this merely the random musings of a quasi-academic.
As I continue to downshift at the decade-in-training mark, anticipating less formal training this year, I may tweak the rubric a bit more and then run all of the instructors through the updated version. However, I would not expect any such adjustments to the rubric to have major implications on the scoring. Someone might move up or down slightly, but I would not expect wholesale changes. I may keep our readers abreast of any such endeavors on my part.
As always, thanks for reading. Feel free to comment or ask questions below, as we always welcome civil discourse. Please remember to support us via our Amazon Affiliate link at the top of the page, as it is truly the only compensation we receive.