The Value of Testing
All tests, exams, and assessments (referred to collectively as "tests" here) are carefully developed following industry best practices and standards, which are based on years of research into what is required to ensure a test's validity and reliability. Test developers take the process of designing and developing tests seriously, gathering input from industry professionals and subject matter experts, known collectively as SMEs, throughout the test development process. Test development includes the following steps:
- Define the content domain. Experts identify the critical knowledge, skills, and abilities (KSAs) that are required for competence for the target audience of the test. This is typically done through a job task analysis (JTA), task analysis (TA), or similar.
- Define the distribution of knowledge, skills, and abilities on the test. These KSAs are evaluated by additional experts, who rate the importance of each one; they may also be asked how frequently each skill is used or ability is needed. Their evaluation becomes the blueprint that defines the distribution of questions across the content domain.
- Write exam questions. Based on the blueprint, SMEs write exam questions to measure the critical KSAs.
- Alpha or technical review. A panel of experts who did not write the items reviews each item for accuracy, relevance, appropriateness, and alignment to the content domain.
- Beta test. The alpha-reviewed items are then pilot tested in a test-like situation known as a "beta exam." This ensures that only the best content is included in the live exam.
- Finalize question pool. The results of the beta exam are psychometrically analyzed for factors such as difficulty, ability to differentiate high and low performers, reliability, and more. Only those questions that meet the test developer's psychometric criteria will appear on the live exam.
- Set cut score. A panel of experts works with a psychometrician to determine a passing score. This score is based on the KSAs needed to be considered competent in the content domain and on the difficulty of the questions included on the test.
- Administer the test.
- Sustainment/maintenance. After a test is published, its psychometric performance and the ongoing validity and reliability of its questions are continually monitored. If questions are no longer valid, reliable, or performing well, they are removed or replaced, and the test is republished.