
What is the history of exams as a career entry point?

Audrey Campbell
M.A. in Teaching; Senior Marketing Writer


When one hears “high-stakes testing,” many immediately think of the public school tests that have been used for decades to assess student performance, determine district budgets, and decide entrance into universities and graduate programs.

In fact, since the 1930s, multiple-choice tests and other right/wrong binary questions have been widely used in schools throughout the United States (Ramirez, 2013). “With increased student enrollment came a need for communication between institutions,” says Christine Lee. “Grading became more standardized, objective, and scale-based; so did testing, led by multiple-choice exams and the belief that the MCQ (multiple-choice question) format was ‘scientific.’ Testing, too, could not be specific to students or institutions but have meaning to third parties.”

However, if you pull the lens out just a bit further, you can apply the concept of high-stakes testing to a variety of scenarios that place an exam between an individual and a desired goal, most commonly for the purpose of accountability. In the professional world, from computer science positions and teaching to jobs in the medical field and vocational careers, most jobs require training and experience before work can begin. Many roles also demand additional certification, licensure, or examinations in order to guarantee, against agreed-upon standards, that an individual has sufficient know-how to perform the job.

But although many of us today take exams as a career entry point for granted, that wasn’t always the case.

Take physicians here in North America, for example. Following the Civil War, many medical schools in the United States had very few requirements for admission, which resulted in an increasing number of physicians practicing without aligned philosophies or even thorough credentials. For a variety of reasons, “[r]ecognizing the value of qualified physicians to help promote the public's health, not to mention the value of qualified engineers and architects, occupational licensing developed anew,” write David Johnson, M.A. and Humayun J. Chaudhry, D.O. in their article “The History of the Federation of State Medical Boards: Part One — 19th Century Origins of FSMB and Modern Medical Regulation.” They go on to say:

“The basic idea was hardly new as colonial officials had licensed multiple trades from physicians to auctioneers to peddlers. The type of licensure familiar to us today for doctors, nurses, pharmacists and lawyers took root in the latter 19th century. While the justification was the same for almost all of the occupations (ostensibly to protect the public) this was perhaps the most obvious and persuasive argument in the case of doctors” (Johnson and Chaudhry, 2012).


After several years of heated deliberation, the Federation of State Medical Boards was finally founded in 1912, marking the start of standardized, required examinations for the medical profession in the United States. Today, mandatory medical exams exist across the spectrum, including for physician assistants, nurses, and midwives, with the stated purpose of “[protecting] the public’s health, safety and welfare through the proper licensing, disciplining, and regulation of physicians and, in most jurisdictions, other health care professionals.”

And because the consequences of cheating on such a high-level exam could ruin someone’s entire career or, worse, put those who depend on physicians, nurses, and other practitioners at severe risk, academic integrity within these career-entry examinations is all the more important. In essence, the exam becomes an unofficial contract between the individual and the public: “By passing this exam, I show that I am skilled, capable, knowledgeable, and ready to serve you.”

Interestingly, it’s not only jobs in the medical field that require an exam before one can practice. In 1919, the National Council of Architectural Registration Boards was established to begin the process of implementing, yes, examinations for architectural licensure. And lest we forget the infamous bar examination, the test here in the US that certifies a lawyer to practice law.

In 1763, Delaware became the first jurisdiction to offer an oral “bar exam,” and in 1855, Massachusetts became the first state to administer a written bar exam consisting solely of essays. These formal examinations tested both candidates’ skills and knowledge, and they remained essay-only until the Multistate Bar Exam (MBE) was added in 1972, which was meant to “create a better grading system as well as an equal and fair opportunity for people seeking careers practicing law,” says Amy Gaiennie.

These career entry-point examinations, like standardized tests used in schooling, can often improve the retention of valuable information by requiring test-takers to reflect upon and study their materials. They also produce granular data and measurable analytics, examining the performance of test-takers across socio-economic status, ethnicity, and geography, which in turn can inform the integrity of the tests themselves (Are they fair? Are they accessible?). And if one enters the IT space or seeks a job in the computer science field, many companies may require what is called a “vendor-specific certification,” which can qualify an individual for a more advanced position or higher salary.

A key component of fairness and accessibility hinges on objective, unbiased grading and comes down to how these exams are created, delivered, and scored. Companies like ExamSoft work to efficiently deliver and securely score high-stakes tests worldwide, as well as offer assessment feedback and data insights—like item analysis—in order to improve overall exam quality.
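To make “item analysis” a little more concrete: two of its most common statistics are an item’s difficulty (the proportion of candidates who answer it correctly) and its discrimination (how well the item separates strong candidates from weak ones). Below is a minimal sketch in Python of how those two numbers can be computed from a simple 0/1 response matrix; the function name and sample data are illustrative assumptions, not a description of ExamSoft’s actual tooling.

```python
# Minimal item-analysis sketch: per-item difficulty and point-biserial
# discrimination. `responses` is a list of candidates, each a list of 0/1
# item scores. Illustrative only; real psychometric tools handle far more
# (omitted answers, weighting, corrected item-total correlations, etc.).

def item_analysis(responses):
    n = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]          # each candidate's total score
    mean_total = sum(totals) / n
    sd_total = (sum((t - mean_total) ** 2 for t in totals) / n) ** 0.5

    stats = []
    for i in range(n_items):
        scores = [row[i] for row in responses]
        p = sum(scores) / n                           # difficulty: proportion correct
        if sd_total == 0 or p in (0.0, 1.0):
            r_pb = 0.0                                # no variance to correlate against
        else:
            # Point-biserial correlation between item score and total score.
            # (Real tools usually exclude the item itself from the total.)
            mean_correct = sum(t for t, s in zip(totals, scores) if s) / sum(scores)
            r_pb = (mean_correct - mean_total) / sd_total * (p / (1 - p)) ** 0.5
        stats.append({"item": i + 1,
                      "difficulty": round(p, 2),
                      "discrimination": round(r_pb, 2)})
    return stats

# Example: four candidates, three items
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
for row in item_analysis(responses):
    print(row)
```

An item that nearly everyone gets right (or wrong), or whose discrimination hovers near zero, becomes a candidate for review, and that is exactly the kind of signal that feeds back into overall exam quality.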

Institutions and organizations depend greatly on these exams to inform their graduation rates, hiring practices, and even promotion opportunities. But the status quo is being questioned daily: many edtech and software companies are helping to make tests more equitable and uphold integrity, in addition to supporting assessment design, efficacy, and accuracy. And with the Scholastic Assessment Test (SAT) now digital, and for some universities not required at all, there are signs that innovation around standardized testing, be it for schooling or for career entry, could change yet again what we deem the norm.