Partnership for Assessment of Readiness for College and Careers | PARCC

Clarifying Language and Terms

This glossary clarifies terms used frequently in discussions about the new Common Core-aligned assessments being developed by PARCC. These are not uniquely PARCC terms, but they reflect the language PARCC is using in developing the assessments.

Many of the conversations about the new tests can be technical in nature, but at the heart of all of this communication is a shared desire to give every student a great education that prepares them for success in college, careers and life.

Accessibility Features

Embedded supports (see below) available to all students during a computer-based test. Examples include allowing students to adjust the background color or contrast of the screen. Educators activate specific accessibility features prior to a test, based on a student’s personal needs profile, which is designed to ensure students receive appropriate access to tests without the distraction of features they don’t need.

Accommodation

Practices and procedures that provide students with disabilities equitable access to instructional materials and assessments. Accommodations generally fall into several categories:
  • Presentation accommodations change the method or format in which a test is provided to students. These may include the use of Braille, for example.
  • Response accommodations allow for changes in the way students can answer test questions. Dictation is an example.
  • Timing and scheduling accommodations include extending the time allowed for testing or allowing a student to take frequent breaks.


Analytic Writing

Writing that uses evidence and logical integration and framing of concepts to advance an argument or convey an idea.

Anchor Text

On the ELA/literacy assessment, students are asked to analyze topics presented through several texts. The first one, which introduces the overall topic, is called the anchor text.

Assessment System

The PARCC assessment system is a cohesive set of tests that students will take during the school year that include summative (performance-based and short-answer questions) and non-summative components (diagnostic, midyear, and speaking and listening tools). This comprehensive and cohesive system will better inform instruction and provide critical information to students, teachers and parents about student learning throughout the school year.

Bias

Errors in test scores that result from parts of the test that are not relevant to the content being measured and that differentially affect the performance of different groups of test takers.

Blueprint

Blueprints are a series of documents that together describe the content and structure of a test.


Claim

A statement about student performance based on how students respond to test questions. PARCC tests are designed to elicit evidence from students that supports valid and reliable claims about the extent to which they are college and career ready, or on track toward that goal, and are making expected academic gains based on the Common Core State Standards. To support such claims, PARCC assessments are designed to measure and report results in multiple categories called master claims and sub-claims.


Complexity

The level of cognitive demand expected for a student to correctly answer a test item. For example, an item or a task requiring students to predict a phenomenon based on data presented in a graph would generally be more complex than an item or task requiring students to simply describe the data presented in the graph.  


Construct

The concept, characteristic or skill a test is designed to measure.


Cut Score (Threshold Score)

A specific point on a score scale that distinguishes between two performance levels. Scores at or above that point are interpreted to mean something different from scores below that point. So, students performing below a certain cut score might demonstrate partial command of material in a given subject, while students performing above the cut score might demonstrate moderate command.
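
To make this concrete, here is a minimal Python sketch of how cut scores divide a score scale into performance levels. The cut score values and level names are hypothetical examples, not PARCC’s actual thresholds.

    # Hypothetical cut scores and level names, for illustration only.
    from bisect import bisect_right

    CUT_SCORES = [700, 725, 750, 786]
    LEVELS = ["Level 1", "Level 2", "Level 3", "Level 4", "Level 5"]

    def performance_level(scale_score: int) -> str:
        # A score at or above a cut score falls into the higher level;
        # a score below it falls into the lower level.
        return LEVELS[bisect_right(CUT_SCORES, scale_score)]

    print(performance_level(749))  # Level 3 (just below the 750 cut)
    print(performance_level(750))  # Level 4 (at the 750 cut)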


Device

Digital tools that students may use in daily classroom instruction and to take tests, including, but not limited to, desktop computers, laptops, netbooks, tablets and assistive technologies for students requiring accommodations.


Diagnostic Tool

Diagnostic tools are optional and will be available throughout the year. They are designed to measure students’ strengths and weaknesses. Teachers can use these to inform instructional strategies. The results provide educators with information about what standards students have mastered and which ones may need more attention and focus.


Embedded Support

An embedded support is a tool, support, scaffold or preference that is built into the assessment system that can be used by any student, at his or her own discretion. Embedded supports are known as universal design test features. They can be accessed onscreen through a toolbar, a menu or a control panel, as needed. For example, students who take the PARCC assessments will have access to a highlight tool, which will enable them to highlight text, as needed, to recall and emphasize certain material.


Evidence(s)

Information gathered from student responses to test questions that supports claims about student performance.


Evidence Statement

Words or phrases that describe student work and support claims about students’ mastery of particular standards. Evidence statements describe what one can point to in a student’s work to show that the student has mastered a specific standard.


Evidence-Based Selected Response (EBSR)

The term refers to a type of ELA/literacy test item with two parts: students first answer a question about a text, then identify the evidence in the text that supports their answer.


Evidence-Centered Design (ECD)

Evidence-centered design is a systematic approach to test development. The design work begins with developing claims (the inferences we want to draw about what students know and can do). Next, evidence statements are developed to describe the tangible things we could point to, highlight or underline in a student work product that would help us prove our claims. Then, tasks are designed to elicit that evidence.


Fairness in Testing

Fairness in testing is closely related to test validity. Evaluating fairness in testing requires a close look at a range of evidence. This process includes the evaluation of empirical data, but it may also involve consideration of some legal, ethical, political, philosophical and economic issues.


Field Test

A test administration used to examine the psychometric quality of items and obtain critical information about testing procedures. The data collected during a field test help inform test development.


Formative Assessment

Tests designed to provide feedback to teachers so they can adjust instruction to improve student learning. Formative assessments can typically be given on several occasions during the school year. These tests typically yield qualitative feedback (rather than scores) that focuses on the details of a student's performance. Formative assessments are commonly contrasted with summative assessments, which are usually single events used to measure educational outcomes at the end of the year.


Growth Modeling

Growth modeling refers to analytical methods used to make evaluative claims about the effectiveness of teachers or schools through aggregation and statistical modeling of student achievement data obtained at multiple points in time.
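
As a rough, simplified illustration of the idea (not a description of PARCC’s methodology), the Python sketch below regresses current-year scores on prior-year scores and averages each school’s residuals as a crude growth indicator. All scores and school names are invented.

    # Rough illustration of one growth-modeling idea (not PARCC's actual
    # methodology): regress current-year scores on prior-year scores, then
    # average each school's residuals as a crude growth indicator.
    # All scores and school names below are invented for the example.
    from statistics import mean

    records = [
        {"school": "A", "prior": 710, "current": 730},
        {"school": "A", "prior": 690, "current": 700},
        {"school": "B", "prior": 720, "current": 715},
        {"school": "B", "prior": 700, "current": 735},
    ]

    # Ordinary least squares fit of: current = a + b * prior
    xs = [r["prior"] for r in records]
    ys = [r["current"] for r in records]
    x_bar, y_bar = mean(xs), mean(ys)
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    a = y_bar - b * x_bar

    # A school's mean residual asks: did its students score above or below
    # what their prior scores predicted?
    residuals_by_school = {}
    for r in records:
        residual = r["current"] - (a + b * r["prior"])
        residuals_by_school.setdefault(r["school"], []).append(residual)

    for school, residuals in sorted(residuals_by_school.items()):
        print(school, round(mean(residuals), 1))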


Item

A statement, question, exercise or task on a test for which the test taker is to select or construct a response or to perform a task.


Learning Standards

Learning standards are written statements of what students should know and be able to do at every grade level. They are also called "content standards." 


Measures of Academic Progress

Measures that describe individual student growth from one year to the next in relation to learning standards that span multiple grades or in relation to the progress of students' peers.


Mid-Year/Interim Module

Optional formative assessments that emphasize hard-to-measure standards.

Model Content Frameworks

The Model Content Frameworks are guiding documents that serve as a bridge between the Common Core State Standards and the PARCC assessments. The frameworks were developed to help design items for the PARCC tests and to support educators' implementation of the Common Core.


Performance-Based Assessments (PBA)

For PARCC, the PBAs in math will focus on reasoning and modeling and include questions that require both short and extended responses. In ELA/literacy, the PBAs will focus on both reading comprehension and writing when analyzing texts.


Performance-Level Descriptors (PLDs)

  • Policy-Level PLDs: Performance levels are the broad, categorical levels used to report student performance on an assessment. Some assessment systems refer to performance levels as "achievement levels." The PARCC policy-level PLDs describe what that performance means and convey the policy implications for each performance level on the PARCC assessments. For example, the policy board setting the standards might require that one of the performance levels indicate readiness for the next grade level. These are usually not grade-level or subject specific.
  • Content-Level PLDs: Content-level PLDs indicate the knowledge, skills and practices that students should be able to demonstrate at each performance level, in each content area (ELA/literacy and mathematics), at each grade. Content and grade level-specific PLDs are designed to inform test item development, the setting of performance level cut scores, and curriculum and instruction at the local level. 


Prose Constructed Response (PCR)

This term refers to a specific item type on the PARCC ELA/literacy assessments in which students are required to produce written prose in response to a test prompt. These measure reading and writing claims.


Released Item

PARCC intends to release a large number of items from its assessments after each administration. The items will help build understanding about the types of performances PARCC expects students to provide to demonstrate mastery of the Common Core State Standards.


Reliability

The degree to which scores for a group of test takers are consistent over repeated applications of a measurement tool.
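
One classical way to estimate reliability is to correlate scores from two administrations of the same test to the same students (test-retest reliability). The Python sketch below illustrates that calculation with invented scores; it is not a description of how PARCC estimates reliability.

    # Illustration of one classical reliability estimate: test-retest
    # reliability, computed as the Pearson correlation between two
    # administrations of the same test to the same group of students.
    # The scores are invented for the example; requires Python 3.10+.
    from statistics import correlation

    first_administration = [712, 698, 745, 760, 703, 731]
    second_administration = [715, 690, 748, 755, 709, 728]

    # Values near 1.0 indicate highly consistent scores across administrations.
    print(round(correlation(first_administration, second_administration), 3))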


Rubric

An established set of criteria, including rules, principles and illustrations, that attempt to communicate expectations of quality. PARCC has released a set of rubrics intended to aid educators and item developers. 

 
Scale Score

A numerical score, derived from student responses to test items, that summarizes the overall level of performance attained by that student. Scale scores represent what students know and can do, while performance level results indicate the degree to which student performance meets expectations of what they should know and be able to do. (See also: Performance-Level Descriptors.)
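
Scale scores are typically produced by applying a fixed transformation to an underlying ability estimate so that results are comparable across test forms. The Python sketch below shows a generic linear version of that step with made-up constants; PARCC’s actual scaling procedures and parameters are not shown here.

    # Sketch of a generic linear scaling step: an underlying ability
    # estimate (theta) is mapped onto a reporting scale so that scores are
    # comparable across test forms. The slope, intercept and score bounds
    # are made-up constants, not PARCC's actual scaling parameters.
    def scale_score(theta: float, slope: float = 25.0, intercept: float = 750.0,
                    lowest: int = 650, highest: int = 850) -> int:
        raw = slope * theta + intercept
        return round(min(max(raw, lowest), highest))  # clamp to the reporting range

    print(scale_score(-1.2))  # 720
    print(scale_score(0.0))   # 750
    print(scale_score(3.0))   # 825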


Speaking and Listening Tools

These are for ELA/literacy only. They are designed to indicate students’ ability to communicate their ideas and listen to and comprehend the ideas of others. They also are designed to test how well students can integrate and evaluate information from multimedia sources. This will be a non-summative component of the PARCC assessment system and will be administered in grades 3–11.


Standard Setting

The process used to establish performance (achievement) level cut scores.


Standards for Mathematical Practice

The Standards for Mathematical Practice describe ways in which students ought to engage with mathematics through elementary, middle and high school. Examples of these practices include problem solving, procedural fluency and conceptual understanding.


Summative Assessment

A summative assessment is designed to measure a student’s knowledge and skills at the end of an instructional period, such as an entire school year or at the conclusion of a course.  


Task

This term has subject-specific meanings. In ELA/literacy, a task is a coherent collection of assessment items. Tasks are cohesive because they are connected to a specific reading passage or set of passages. In math, a task is an operational item that may either have a single prompt or multiple prompts. The PARCC math tests contain three types of tasks:  
  • Type I tasks assess concepts, skills and procedures.  
  • Type II tasks assess students’ ability to express mathematical reasoning.
  • Type III tasks assess modeling and applications. 


Technology-Enhanced Constructed Response (TECR)

This ELA/literacy item type uses technology to capture student comprehension of texts in authentic ways that have historically been difficult to measure with traditional assessments. Examples include drag-and-drop, cut-and-paste, and text-highlighting features.


Technology-Enhanced Items (TEIs)

TEIs are items administered on a computer and take advantage of the computer-based environment to present situations and capture responses in ways that are not possible on a paper-based test.


Test Form

A compilation of test items and/or tasks that comprise the full assessment.


Universal Design for Assessment

Describes a framework for curriculum design, instructional processes and tests that provides all students with equal opportunities to learn and demonstrate their knowledge and skills. The purpose is to make tests accessible to as many children as possible and minimize the need for individualized design or accommodations. Universal design builds flexibility into curricula and tests at the development stage, which enhances a teacher’s ability to make adjustments for different learners during classroom instruction. Using these principles, test developers consider the full range of students being tested and develop items, tasks and prompts that measure learning for the greatest number of students without the need for accommodations wherever possible.
 

Validity

The degree to which accumulated evidence and theory support specific interpretations of test scores.

Vertical Scale

A single, one-dimensional scale that allows for the monitoring and tracking of student growth and progress across grades, over time. Vertical scaling links tests of increasing difficulty along a learning continuum, so educators can get an accurate measurement of a student’s gains over time.

