That’s a fair point, and I admitted as much previously. When I tested, the module was one that I had taken a number of times in practice already that month. I still wrote out and triple-checked all my work. I’ve always prided myself on being precise in my plotting, and there are certain tips and tricks you learn when you’re first taught TNav that make a huge difference in how accurate your plots are. Maybe those plotting tips and tricks are what gave SUNY and GLMA such comparatively high pass rates. It would be interesting to see the historical trends for each of the schools and each of the modules.
Clearly not to the same degree, otherwise why would all the academies be arguing to the contrary?
I guess my main issue with this whole thread is that people are acting like the academy instructors don’t know what they’re talking about, yet everyone on here who graduated 10 years ago, on average, thinks they know what’s really going on. It’s mildly annoying.
I didn’t plot it out but it looks like the 12.5 kts is only used to compute an ETA to a waypoint. In that case it would be appropriate to use a slip table or the like to estimate STW. Also two positions are given and the SOG comes out as 13.5 kts so that rules out the possibility that 12.5 kts is SOG.
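For what it’s worth, here’s a quick sketch of how SOG falls out of two fixes. The positions and run time below are made up for illustration, not the ones from the exam; the point is just the arithmetic of distance run over time elapsed:

```python
from math import radians, sin, cos, asin, sqrt

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes in nautical miles (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(a)) * 3440.065  # mean Earth radius in nm

def sog_knots(lat1, lon1, lat2, lon2, hours):
    """SOG = distance made good between fixes over elapsed time."""
    return distance_nm(lat1, lon1, lat2, lon2) / hours

# Hypothetical fixes one hour apart, 13.5' of latitude made good due north
print(round(sog_knots(40.0, -73.0, 40.225, -73.0, 1.0), 1))
```

If the SOG from the fixes disagrees with a stated 12.5 kts, that stated speed is something else, such as an ordered speed or speed through the water.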
I agree we don’t know what the main issue is here. Most likely, like most things, it’s a combination of factors. The overall degree of difficulty is more relevant than linked / not linked.
Solving a plotting problem you’ve never seen before requires a better understanding of how the navigation elements fit together as a whole, compared to the repetition of working from a known set of problems. That may make the new problems seem more difficult.
One factor may be that the instructors and/or the students have become more complacent having adjusted to knowing the question set. The wide range of results from 0% to 56% hints that it’s not just the test.
That’s gcaptain forum about 80% of the time.
Someone, or a group of someones, is not fully informed, or worse, chooses not to believe that multiple questions aren’t linked anymore on the new 3rd mate chart plots. I wonder why? I participated in the recent chart plot working group at NMC where the new 3rd mate chart plot drafts were worked and commented on by groups composed of academy instructors, licensed industry personnel, and actively sailing mariners. The original drafts were linked, but linking of more than one question was removed before the new exams were released. The currently posted 3rd mate sample exam appears to be one of those original drafts. Exam passing rates will improve, deck watchstanding competency will benefit, and licensed Masters will sleep better at night.
He’s not wrong. Tuition rates across the country have skyrocketed. Why do schools keep using that money to hire so many paper pushing administrators vs instructors?
I see your point. Me personally, I taught myself the 3M exam as I didn’t graduate from a school, but it really makes you wonder: where and why are the students failing? Is this really a COVID issue? That seems like the quickest and easiest assumption to make. Is it a scapegoat?
Yeah, I am sure C-19 plays a part, but anytime there is an exam where 0% pass, especially for something people should theoretically be preparing for over years, serious scrutiny must be given to the exam itself.
Even the hawsepipers like yourself had significantly lower pass rates than normal. That should be a massive indication.
Overall, I think the exam should be hard. This is a serious business with enormous responsibility. But these pass rates are indicative of a poorly written exam, in my assessment.
Have hawsepipers’ pass rates been historically higher than the academy folks’? Why is it such a massive indication?
Every time I took the chart plot exam, I also worked it out via the sailings. That is, mathematically.
I didn’t want a mistake I’d made because of the thickness of a pencil line, or a slightly wrong angle with my protractor, to sink me.
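For anyone curious what checking a plot “via the sailings” looks like, here’s a minimal plane-sailing sketch (the positions are hypothetical, and plane sailing is only the short-run approximation; Mercator sailing would be the more rigorous cross-check over longer distances):

```python
from math import radians, degrees, atan2, cos, hypot

def plane_sailing(lat1, lon1, lat2, lon2):
    """Course and distance between two positions by plane sailing.
    Inputs in decimal degrees; returns (true course deg, distance nm).
    Reasonable only for short runs away from high latitudes."""
    dlat = (lat2 - lat1) * 60.0                  # difference of latitude: arc-minutes = nm
    mid_lat = radians((lat1 + lat2) / 2.0)
    dep = (lon2 - lon1) * 60.0 * cos(mid_lat)    # departure in nm (east-west distance)
    course = degrees(atan2(dep, dlat)) % 360.0   # true course from north
    dist = hypot(dlat, dep)                      # distance in nm
    return course, dist

# Hypothetical fix-to-fix check
c, d = plane_sailing(40.0, -73.0, 40.5, -72.5)
print(round(c, 1), round(d, 1))
```

If the mathematical course and distance agree with what the triangles and dividers gave you on the chart, the pencil work is probably sound.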
I just took the 3rd Mates exams recently and figured I’d weigh in. From what I saw this time linked questions weren’t such an issue.
Between what I saw this year and from what I heard last year from the people who took the first round of the new chart plots, two big takeaways stick out to me. Read the coast pilot and bring a big triangle.
The chart plot I took leaned more heavily on reading into the questions than the old chart plots did. There were probably three or four questions where you had to open up the Coast Pilot, compared to one or two on the old tests. Many people got these wrong this year because they relied on the information on the chart alone, which seemed misleading by design.
On the old chart plots you could get away with using your standard-sized triangles, but on the plot I saw, and from what I’ve heard from others, you’re very likely going to take a bearing or make a track line from one end of the chart to the other, which is why a 12” triangle or larger comes in handy.
One thing I saw that wasn’t an issue, but whose purpose confused me: there was a question entirely irrelevant to the plot. It went something like “You’re taking heavy seas broad on the stbd beam, what do you do to reduce rolling? A) speed up B) slow down C) change course D) idk.” I think almost everyone got that one correct, but it seemed strange to have a seamanship/deck general question on a chart plot.
This is what I was told by the instructors, yes. I have not seen the data myself.
I think it shows that it’s not just the way the academies approach studying for the exams, but rather that the exams themselves were difficult/unrealistic regardless of the study techniques you use.
This showed up in my email tonight. Just in time.
U.S. Coast Guard, Office of Merchant Mariner Credentialing Job Task Analysis
Dear Mariner,
This email is notification of a voluntary survey the U.S. Coast Guard is conducting of deck officers, who hold Merchant Mariner Credentials endorsed for service on limited tonnage or restricted trade vessels.
Within the next few weeks, you will receive an email from JobTaskAnalysis@uscg.mil containing a unique survey link. The link button below is NOT the survey link.
The Office of Merchant Mariner Credentialing is conducting a series of Job Task Analyses to improve the quality of the content of credentialing examinations. As part of this initiative, we ask for your participation in a survey in order to assist us in gaining a better understanding of mariner duties and responsibilities on board their vessels. By completing this survey, you will provide insight into your job tasks that will help shape improvements to the Coast Guard examination content.
This is an opportunity for you to contribute to improving Coast Guard credentialing examinations. We appreciate your participation in the survey and look forward to receiving your input. If you have questions regarding the survey, please contact us at JobTaskAnalysis@uscg.mil
U.S. Coast Guard
Office of Merchant Mariner Credentialing
So scanty information, gotcha
I know plenty of hawsepipers who used the same techniques (stem/answer) and approaches that academy kids used; we just had it in what I believe to be a stricter and more structured environment to promote studying. I applaud those who didn’t navigate through the system that way, because I definitely needed that structure back then. I totally agree with your synopsis about the exams themselves, so no argument there! But I am still curious why the fella thinks it’s a massive indication when even the hawsepipers have a reduced passing rate.
Just took CM unlimited tests (Q100-109) and passed. The questions were no different from what I saw in upgradeu, lapware, etc. Not even that much different from what I remember on 3/M exams, maybe just an extra moon or planet thrown in there.
I was practicing the “operational level” chart plots by accident for a few days instead of the “management” level. I also didn’t notice much else that was different compared to when I took the 3/M exams. There have always been Coast Pilot look-ups. Or you could remember that if the word “fringed” is in an answer (“fringed with rocky shoals”), it is always the correct choice. Also, if a choice has “sandy” in it, it’s correct about 80% of the time.
I’m simply saying it isn’t an academy problem if all forms of students/cadets had significantly lower pass rates. Much of this thread is blaming teaching standards and reduced quality of cadets, yet hawsepipers show similarly bad results which is a clear indication that the exam itself should be under scrutiny. That’s all.
Maybe the instructors are wrong. The reality is, unless we have seen this specific chart plot, we really can’t argue with the instructors opinions…they are the only ones who know what was on that specific exam. But the pass rates should at least raise an eyebrow. These are my only points.
Where did you get this from? I am not sure NMC has this data, or the ability to retrieve it. The academy data is known because they all test at the same time in the same place.
This was from an instructor who has been doing this a long time. I could be wrong, but I would consider him to be a valid source.