2002 Summary

April 1, 2002

College of Science and Mathematics Assessment Committee

  • Stephen A. Leslie, Committee Chair, Department of Earth Sciences
  • Cathy Christie, Department of Nursing
  • Larry Coleman, Department of Physics and Astronomy
  • Jim Fulmer, Department of Mathematics and Statistics
  • Marian Douglas, Department of Chemistry
  • Tom Lynch, Department of Biology
  • Thea Spatz, Department of Health Sciences

The College of Science and Mathematics used a 0 to 4-point scale to rate this year’s program assessment reports. This scale corresponds to the Exceptional (4), Reasonable (3), Limited (2), and Inadequate (1) criteria distributed by the Provost’s Office. A score of 0 indicates that the area of the report was not present. The following table is a summary of the scores for each program.


| Program | Use | Faculty & Stakeholder | Approach | Overall |
|---|---|---|---|---|
| B.S., Biology | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable | 3.0 Reasonable | 2.5-3.0 Reasonable |
| M.S., Biology* | — | — | — | — |
| B.S. & B.A., Chemistry | 3.8-4.0 Exceptional | 3.6-3.8 Exceptional | 3.8-4.0 Exceptional | 3.8-3.9 Exceptional |
| M.S. & M.A., Chemistry | 2.1-2.3 Limited | 2.0-2.2 Limited | 2.0-2.2 Limited | 2.1-2.3 Limited |
| B.S., Environmental Health Sciences | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable |
| B.S., Geology | 3.2-3.4 Reasonable | 2.8-3.0 Reasonable | 3.4-3.6 Exceptional | 3.1-3.3 Reasonable |
| B.S., Health Sciences | 2.6-3.0 Reasonable | 3.0-3.4 Reasonable | 3.2-3.5 Reasonable | 3.2 Reasonable |
| M.S., Integrated Science and Mathematics | 1.0 Inadequate | 1.0 Inadequate | 2.5 Reasonable | 1.5 Limited |
| B.S. & B.A., Mathematics | 3.0-3.5 Reasonable | 2.5-3.0 Reasonable | 3.5-4.0 Exceptional | 3.0-3.5 Reasonable |
| M.S., Applied Mathematics | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable | 2.5-3.0 Reasonable |
| A.S., Nursing | 3.5-4.0 Exceptional | 3.5-4.0 Exceptional | 3.5-4.0 Exceptional | 3.5-4.0 Exceptional |
| B.S. & B.A., Physics | 2.8-3.0 Reasonable | 2.5-2.7 Reasonable | 2.8-3.0 Reasonable | 2.7-2.9 Reasonable |
| Totals | Inadequate: 1; Limited: 1; Reasonable: 7; Exceptional: 2 | Inadequate: 1; Limited: 1; Reasonable: 7; Exceptional: 2 | Inadequate: 0; Limited: 1; Reasonable: 6; Exceptional: 4 | Inadequate: 0; Limited: 2; Reasonable: 7; Exceptional: 2 |

*No report was generated for the MS in Biology.

All the undergraduate programs in the College of Science and Mathematics are rated at the Reasonable level or above in program assessment based on the PAAG criteria rubric. In general the undergraduate programs are doing a good job with assessment, although there is variability across the college and room for improvement. It is apparent that a culture of assessment is now embedded in the undergraduate programs in the college. The step from Reasonable to Exceptional is a difficult one to make. Two programs have made that step: the B.S./B.A. in Chemistry and the A.S. in Nursing.

The graduate programs have the most room for improvement in their assessment activities. There are two newer graduate programs (Biology and Integrated Science) and two established programs (Applied Mathematics and Chemistry) in the college. The Applied Mathematics program is the only graduate program in the college that has a reasonable assessment plan. Both the Chemistry and the MSISM programs have limited assessment activities in place. The Biology program did not submit a report this year.

Strengths:

  • Considerably more assessment data are being collected. More implementation has occurred, and this has resulted in more useful assessment data.
  • It is clear that all departments are taking assessment seriously. The quality of the reports reflects considerable effort.
  • Most programs have good learning objectives that are linked to the goals of their programs. In addition, most programs have methods to assess the learning objectives.
  • There is evidence that the "assessment feedback loop" is closed in more programs this year than in previous years.
  • Program assessment has improved across the programs that comprise CSAM over the past few years. The continued emphasis on assessment is developing faculty expertise in program assessment, particularly in the programs that score well in the review process.
  • A benefit of participation in the assessment process is that each program has had to examine and evaluate student learning in respect to learning objectives. Programs have discovered areas in which they are doing a good job as well as areas in which they need improvement. Faculty have become involved in assessment and have a more comprehensive idea of the program and its goals as a whole rather than the narrower view of individual courses.
  • Data are being collected from a wider range of sources, covering more diverse areas of assessment.
  • Validity and reliability are being measured by some programs.

Areas of Concern:

  • Although there is considerable faculty and student stakeholder involvement, there needs to be more external stakeholder involvement in some programs.
  • It continues to be apparent that the MS programs are struggling with the assessment process.
  • Departments are collecting information from stakeholders, but are we sharing the results of assessment with all our stakeholders (students, alumni, and employers)?
  • How can the Assessment Committee determine the reliability and validity of the program evaluations?
  • Insufficient attention is given to whether sample sizes are large enough to allow statistically valid conclusions. Changes are being made and justified with statistically suspect data.

Recommendations and Comments:

  • There is a risk that programs and people external to the assessment process may read a score as a measure of worth or worthlessness. A high score does not indicate a good academic program, and a poor score does not mean that the academic program is bad.
  • We recommend that the university continue to look in-depth at the assessment of graduate programs. These programs continue to pose special problems.
  • Now that most programs are collecting data, coming to terms with the strengths and weaknesses of the data collected is expected at this stage in the assessment process.
  • We recommend that the progress report form and the criteria for evaluation not be changed again. It was very helpful that there were only slight modifications from the previous year. We believe that this not only helped the reviewers, but also was quite helpful for the programs.
  • We recommend that a reward system be developed for programs that are doing exceptional assessment. Possible reward:
    • Evaluations that receive a rating of 3 or better be moved to a 2-year written cycle. This does not mean that they do not need to assess the next year. The program’s next report would simply contain the assessment data of a 2-year cycle. The reward is less report writing, not doing less assessment.
  • We suggest that the college consider a college-wide assessment that is applicable to all programs. Specifically, we recommend exploring the use of a critical thinking exam in capstone courses college-wide. All programs have critical thinking embedded in their program goals in some way. This may provide a useful way of doing assessment at both the program and the college level.
  • The process of evaluating program assessment reports needs to be examined. Examination should include issues of reliability and validity as well as methods of providing feedback to individual programs. Perhaps feedback could be given as comments and/or suggestions with numerical scores or ranges of scores eliminated.
  • An external assessment expert would be beneficial, especially if this is a yearly event. The Assessment Committee itself would benefit from consulting a professional in this area.
  • The poster sessions were helpful this year. Discussion with department representatives at the session provided an opportunity for the committee to get a wider, more inclusive grasp of departments’ reports.
  • We recommend forming a committee for graduate program assessment to address issues common to the graduate programs.
  • We recommend that a formula be developed for the distribution of "assessment monies" that includes a constant amount across programs plus an amount based on the number of majors served within each program. Programs should decide on the use of these monies themselves, eliminating the time the Assessment Committee spends reviewing these plans.
  • Use Excellent instead of Exceptional as the descriptor for the "highest level" assessment score.
  • The assessment funds are not being made available in an appropriate time frame. They are made available in October and then must all be spent by the end of the fiscal year.
  • Faculty have remarked that the process requires an investment of time and labor that has come at the expense of student contact.
  • It is not clear what is meant by program "building". Is this for growing the size of an existing program or creating a new one?
  • Mentoring could assist programs.
  • Assessment is an evolving process, and programs should view their plans as works in progress no matter what their ratings. There is room for improvement in all programs, and we encourage all programs to continue to examine all aspects of their assessment plans.