Item Analysis provides statistical data on overall test performance and on individual questions. Instructors can use it to gauge question difficulty and identify questions that may need revision. For example, it shows whether most students struggled with a particular question.

- Go to the test you wish to have analyzed.
- Click the menu button to the right of the test name and select Item Analysis.
- Click Run next to the selected test.
- Once the analysis finishes running, a link will appear under Available Analysis. Click the link to view the analysis.

The top section of the analysis is an overall summary of student performance.

- *Possible Points*: The total number of points for the test.
- *Possible Questions*: The total number of questions in the test.
- *In Progress Attempts*: The number of students currently taking the test who haven't submitted it yet.
- *Completed Attempts*: The number of submitted tests.
- *Average Score*: Scores denoted with an asterisk indicate that some attempts aren't graded and that the average score might change after all attempts are graded. The score shown is the average score reported for the test in the Grade Center.
- *Average Time*: The average completion time for all submitted attempts.
- *Discrimination*: Shows the number of questions that fall into the Good (greater than 0.3), Fair (between 0.1 and 0.3), and Poor (less than 0.1) categories. A discrimination value is listed as Cannot Calculate when the question's difficulty is 100% or when all students receive the same score on a question. Questions with discrimination values in the Good and Fair categories are better at differentiating between students with higher and lower levels of knowledge. Questions in the Poor category are recommended for review.
- *Difficulty*: Shows the number of questions that fall into the Easy (greater than 80%), Medium (between 30% and 80%), and Hard (less than 30%) categories. Difficulty is the percentage of students who answered the question correctly. Questions in the Easy or Hard categories are recommended for review and are indicated with a red circle.
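The category thresholds above can be summarized as two small bucketing functions. This is an illustrative sketch, not Blackboard's actual code; the function names are hypothetical, and the handling of values falling exactly on a boundary (0.1, 0.3, 30%, 80%) is an assumption, since the report only states the ranges loosely.

```python
def discrimination_category(d):
    """Bucket a discrimination value using the report's stated thresholds.

    Boundary handling (>= at 0.1 and <= at 0.3) is assumed; the report
    only says "greater than 0.3", "between 0.1 and 0.3", "less than 0.1".
    """
    if d is None:
        # Listed as Cannot Calculate when difficulty is 100% or
        # all students received the same score on the question.
        return "Cannot Calculate"
    if d > 0.3:
        return "Good"
    if d >= 0.1:
        return "Fair"
    return "Poor"


def difficulty_category(pct_correct):
    """Bucket difficulty (percentage of students answering correctly)."""
    if pct_correct > 80:
        return "Easy"
    if pct_correct >= 30:
        return "Medium"
    return "Hard"
```

For example, a question answered correctly by 90% of students would land in the Easy bucket and be flagged for review, even if its discrimination value is Good.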

The second section of the report provides details on individual questions.

- *Difficulty*: The percentage of students who answered the question correctly. The difficulty percentage is listed along with its category: Easy (greater than 80%), Medium (30% to 80%), and Hard (less than 30%). Difficulty values can range from 0% to 100%, with a high percentage indicating that the question was easy. Questions in the Easy or Hard categories are flagged for review.
- *Graded Attempts*: The number of question attempts where grading is complete. Higher numbers of graded attempts produce more reliable calculated statistics.
- *Average Score*: Scores denoted with an asterisk indicate that some attempts aren't graded and that the average score might change after all attempts are graded. The score that appears is the average score reported for the test in the Grade Center.
- *Standard Deviation*: A measure of how far the scores deviate from the average score. If the scores are tightly grouped, with most of the values close to the average, the standard deviation is small. If the data set is widely dispersed, with values far from the average, the standard deviation is larger.
- *Standard Error*: An estimate of the amount of variability in a student's score due to chance. The smaller the standard error of measurement, the more accurate the measurement provided by the test question.
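To make the per-question statistics concrete, the sketch below computes difficulty, average score, and standard deviation for a hypothetical set of ten graded attempts. The scores are invented for illustration, and the standard-error line uses the classical-test-theory formula (standard deviation times the square root of one minus a reliability estimate) as an assumption; the help page does not document the exact formula Blackboard uses.

```python
import math
import statistics

# Hypothetical per-question scores for ten graded attempts
# (1 = answered correctly, 0 = answered incorrectly).
scores = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]

# Average score across graded attempts.
average = statistics.mean(scores)

# Standard deviation: how far scores spread around the average
# (population form, since we have every graded attempt).
std_dev = statistics.pstdev(scores)

# Difficulty: percentage of students who answered correctly.
difficulty = 100 * sum(1 for s in scores if s == 1) / len(scores)

# Standard error of measurement, assuming a reliability estimate of 0.8
# (classical-test-theory illustration only; not Blackboard's documented formula).
reliability = 0.8
sem = std_dev * math.sqrt(1 - reliability)
```

With these scores, difficulty is 70%, placing the question in the Medium category, and the standard error is smaller than the standard deviation, as expected.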

For additional information, including a short video, visit Blackboard's help guide at: https://help.blackboard.com/Learn/Instructor/Tests_Pools_Surveys/Item_Analysis