Glossary and Resources

Key Terms

Alignment

The deliberate use of measures/assignments that are directly relevant to, and genuinely able to assess, course content and the specific student learning outcomes for the course or program.

Assessment

The collection, analysis and use of evidence to improve student learning in courses and disciplinary or general education programs.

Assessment Technique

Assessment techniques can be applied to gather information on student learning objectives, in which case either direct or indirect measures of student learning can be used. They can also be focused on capturing program-level outcomes, including student retention rates, graduation rates, student ethnicity, and community interactions, to name a few.

Authentic assessment

The assessment process is embedded in relevant, real-world activities.

Benchmark

The criterion against which results are assessed, typically an empirically developed standard. An example would be the expectation that two-thirds of all students score a 3 out of 4 on the critical thinking rubric used to evaluate assignments.
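
As a hypothetical illustration (the rubric scores below are invented for the example), checking results against the two-thirds benchmark is a simple proportion calculation:

```python
# Invented rubric scores for ten students on a 4-point critical thinking rubric.
rubric_scores = [4, 3, 2, 3, 4, 3, 1, 3, 4, 2]

benchmark = 2 / 3  # expectation: two-thirds of students score 3 or above
proportion_met = sum(score >= 3 for score in rubric_scores) / len(rubric_scores)

print(f"{proportion_met:.0%} of students scored 3 or above")
print("Benchmark met" if proportion_met >= benchmark else "Benchmark not met")
```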

Classroom assessment

Assessment to improve the teaching of specific courses and segments of courses.

Close the loop

Faculty discuss assessment results, reach conclusions about their meaning, determine what changes, if any, need to be made based on the results, and implement the appropriate changes.

Course Assessment

Assessment techniques are brought to bear on the stated learning objectives of a given course. Assessment evidence may be collected during the semester within the course or at some time after the course concludes. Data gathered during the course reflects student progress toward the stated course objectives, while data gathered after the conclusion of the course provides evidence that the stated learning outcomes persist over time.

Curriculum Map

Presented as a matrix, a curriculum map relates program-level student learning outcomes (usually enumerated in individual rows) to the courses and/or experiences students complete as they progress toward graduation (usually captured in columns).

Direct Measures of Student Learning

In contrast to opinion surveys and instruments that gather self-reports of student knowledge and/or ability, direct measures of student learning are generated when student work is evaluated to determine performance on a specific learning outcome. Third-party reports of what students know and can do count as direct measures when they are based on direct observation or review of student work submitted to the third party and are student-specific rather than summarized across a cohort of students.

Embedded Assessment

Also referred to as course-embedded assessment, these techniques generate assessments of course-specific student learning outcomes entirely within the duration of the specific course. There are many assessment techniques that can be applied to routine assignments made within a course that can be summarized across multiple sections and/or multiple semesters to provide evidence of student learning at the program level.

Formative Assessment

Utilizes assessment techniques that emphasize the role of feedback in assessing how students are learning and then using the information to make beneficial changes in instruction and/or the learning environment. Formative assessment usually focuses on a limited set of specific outcomes, often a subset of the complete roster of outcomes identified by a program, and is focused primarily on the improvement of program delivery.

G.E. Assessment

Assessment which occurs in G.E. classes. This means that most students are not majoring in the subject in which they are being assessed.

Holistic Rubric

A rubric that involves one global, holistic judgment.

Indirect Measures of Student Learning

Usually found in opinion surveys and instruments that gather self-reports of student knowledge, indirect measures of student learning are generated when students report on their own progress of learning, what experiences they attribute their learning to, how they feel about what they know, and what students value as a result of their educational experiences. Third-party reports of what students know and can do represent indirect measures of student learning when the reports are summarized across a cohort of students rather than student-specific.

Inter-rater reliability

How well two or more raters agree when decisions are based on subjective judgments.
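
As a hypothetical illustration (the scores below are invented), the simplest inter-rater reliability statistic, exact percent agreement between two raters, can be computed like this:

```python
# Invented scores from two raters evaluating the same ten assignments
# on a 4-point rubric.
rater_a = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
rater_b = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]

# Count the assignments where both raters assigned the same score.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)

print(f"Exact agreement: {percent_agreement:.0%}")
```

Percent agreement is easy to interpret but does not correct for chance agreement; statistics such as Cohen's kappa are often used when that correction matters.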

Major Assessment Report

Information on assessment was initially included in the annual report, which was and still is due in May. However, assessment activities are now described in a separate major assessment report that is due September 1st. A template for this report is available in the assessment reporting section of this website.

Norming or Calibration

Evaluators are normed or calibrated so they consistently apply standards in the same way.

Program Assessment

Assessment of student learning and development against program goals and outcomes gives program faculty the opportunity to evaluate the effectiveness and status of their academic program while also yielding vital information for improving curriculum and instruction. Program assessment is comprehensive across a set of prioritized program outcomes, in contrast to course assessment, which is limited to course-specific outcomes.

Purposeful Sample

A sample created using predetermined criteria, such as proportional representation of students at each class level.

Random Sampling

A sample in which every element in the population has an equal chance of being selected.
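
As a hypothetical illustration, a simple random sample of student work can be drawn with Python's standard library, giving every paper an equal chance of selection (the paper names are placeholders):

```python
import random

# Placeholder population: 100 submitted papers.
papers = [f"paper_{i}" for i in range(1, 101)]

# Draw 20 papers without replacement; each paper is equally likely to be chosen.
sample = random.sample(papers, k=20)
print(sample)
```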

Reliability

The degree of measurement precision and stability for a test or assessment procedure.

Response Rate

The proportion of contacted individuals who respond to a request.

Rubric

An explicit scheme for classifying products or behaviors into categories that are steps along a continuum.

Sampling Validity

How well a procedure’s components, such as test items, reflect the full range of what is being assessed.

Signature Assignment

An assignment, task, activity, or project purposefully created or modified to collect evidence for specific learning outcomes. The ‘signature’ part of the assignment is the set of defining characteristics that reveal deep thinking and help students think like disciplinary experts. Ideally, other coursework builds toward the signature assignment, so that it measures the culmination of what the student learned in the course for a particular outcome. Signature assignments work well when they are course embedded.

Signature assignments can be designed collaboratively by faculty. They can be generic in task, problem, case or project to allow for contextualization in different disciplines or course contexts.

Survey

A questionnaire that collects information about beliefs, experiences, or attitudes.

SOAP

Student Outcomes Assessment Plan. A specific plan created by a department or program that clearly identifies goals and student learning outcomes, as well as the specific direct and indirect measures that will be used to assess the department's or program's SLOs. At Fresno State, all SOAPs must use the required template found in the SOAPs section of this website. A SOAP has seven required elements; the final element is a discussion of closing the loop, i.e., how the department will use assessment results, determine whether any changes are necessary, and implement changes if they are.

Student Learning Outcome

Student Learning Outcomes are statements that describe significant and essential objectives that learners have achieved, and can reliably demonstrate, at the end of a course or program. In other words, learning outcomes identify what the learner will know and be able to do by the end of a course or program. Student Learning Outcomes, or SLOs, must:

  • reflect essential knowledge, skills or attitudes;
  • focus on results of the learning experiences;
  • reflect the desired end of the learning experience, not the means or the process;
  • represent the minimum performances that must be achieved to successfully complete a course or program.

SLOs must be stated clearly, and the description should use a verb appropriate to the level of the skill being demonstrated. Basic knowledge can be demonstrated by explaining or describing, while an ability to make deductions can be demonstrated by analyzing a point or idea. Bloom's Taxonomy provides specific information on lower- and higher-order skills and the appropriate terms.

Summative Assessment

Utilizes assessment techniques that emphasize the comprehensive achievement of program outcomes across comparatively large student cohorts. While summative and formative assessment need not be mutually exclusive, the tenor of summative assessment is to provide evidence of accountability and achievement of comprehensive program outcomes compared to formative assessment, which focuses feedback to improve program delivery.

Triangulation

Multiple lines of evidence lead to the same conclusion.

Valid Measures

Assignments that successfully measure what they are supposed to measure and align with an institution’s goals and objectives.

Value Added Measure

These are used to estimate or quantify how much of a positive (or negative) effect individual teachers have on student learning during the course of a given school year. To produce the estimates, value-added measures typically use sophisticated statistical algorithms and standardized-test results, combined with other information about students, to determine a “value-added score” for a teacher. School administrators may then use the score, usually in combination with classroom observations and other information about a teacher, to make decisions about tenure, compensation, or employment.


Resources:

Original and Revised Bloom's Taxonomy and Related Resources

Fresno State Institutional Learning Outcomes

Portland State Assessment Planning Webpage

Assessment at UC Merced

University of Hawaii at Manoa Assessment Planning Webpage

University of Central Florida Assessment Handbook

Colorado Mesa University Assessment Handbook

Bakersfield College Guide to Assessment Webpage

University of Wisconsin at Madison Undergraduate Program Assessment

University of Connecticut: Goals, Objectives and Outcomes Webpage

University of Iowa Assessment Methods Webpage

Kansas State Assessment Methods Webpage

University of Richmond Student Learning Outcomes Webpage

University of Connecticut: How to Write a Program Mission Statement Webpage

California State University, Fullerton Closing the Loop Webpage

Georgia State University Closing the Loop Webpage

Wayne State University Mission Statement Webpage

Biola University Mission Statement Webpage

University of Rhode Island Student Learning Outcomes Webpage

University of Connecticut Student Learning Outcomes Webpage

AAC&U Value Rubric Development Project

WASC Description of Core Competencies

WASC 2013 Accreditation Handbook