RESULTS
Analysis of the survey data covered observed software errors, subject demographics, and exit interview responses.
Screen Errors:
Three types of errors were recorded: click-on-text, click-on-image, and click-and-hold. The most frequently observed error involved the click-and-hold feature. Across the 30 subjects observed, click-and-hold errors accounted for 59% of the total, followed by click-on-image errors (37%) and click-on-text errors (4%). Overall, subjects made 0.496 errors per screen, approximately one error for every two screens visited.
Initial Interview:
Results from the demographics survey (Appendix 1) indicated that six age groups were measured. The largest group of subjects, comprising 30% of the total sample population, was 36-45 years old (Appendix 2). All subjects indicated they read English fluently.
Eighty-seven percent of participants felt comfortable working with computers. Residency data showed that 46.7% of the sample population were out-of-town residents and 40% were from the Cornell community (Appendix 2). Fifty-seven percent indicated they had visited the museum before; of this group, 87.5% visited occasionally and 12.5% visited frequently (Appendix 3). Subjects were asked how long they intended to spend in the museum during their visit; 48% reported they would spend 30 minutes to an hour (Appendix 2). Subjects were also asked why they were visiting the museum that day. Answers included "to look around", "visiting new exhibition", "the view from 5th floor window", "children's program", and "tour". There was no significant difference among the frequencies of these responses.
The results of the software preference survey are summarized in Appendix 4. Forty-three percent of subjects reported they would like to see a floor plan of the museum included in the computer program (Appendix 4), and 33% indicated they liked the software. An open-ended question collected additional concerns about the software; 47% of subjects reported that they did not like the click-and-hold feature (Appendix 5). Participants were asked which type of interface (mouse, keyboard, or touchscreen) they would prefer: 60% preferred a mouse and 40% preferred a touchscreen.
In general, 55.2% of the subjects felt that the computer program had sparked their interest in the museum. When asked if the program had encouraged them to explore more of the building than they had intended, responses were evenly split (33.3% each) among "yes", "no", and "don't know".
Exit Interview:
Results from the exit interview (Appendix 6), completed by 22 subjects, are summarized in the following section. Note that attrition occurred during this second part of the study; eight of the original subjects did not respond to the exit interview. The survey asked subjects to give a "yes" or "no" response to a variety of questions concerning their visit to the museum. When participants were asked whether any computer image had sparked their interest enough to try to find that piece of artwork or exhibit, 77% responded that it had not. Of the 23% who indicated that it had, 63% were not able to locate the artwork. Seventy-three percent of the participants reported that they did recognize some images from the computer while touring the museum. Of those respondents, 40% indicated that they would like to see other pieces of art included in the computer program, such as woodcarvings, Tiffany glass, Frank Lloyd Wright, Turner prints, Homer prints, more from the Asian collection (including the ceramics), and works from the current exhibitions.
Subjects were then asked about orientation within the museum. Twenty-three percent of the visitors felt the need to ask, or did ask, someone for directions. When asked whether it would be useful to have a computer with this program on every floor of the museum, 59.1% responded that it would. Visitors were also asked whether the computer helped them find their way around the museum. Eighty-one percent felt that it did not, and the most frequent reason given was that no map was included in the program. Other reasons noted were that they were on a tour, they were frequent visitors, they just wandered, or they would have liked more images on the computer. Forty-five percent of the subjects indicated that they would have liked a printout of some of the computer screens: 25% of those respondents wanted a printout of the text, and 12.5% wanted a printout of "any pages", "recent exhibitions", "American art", or "more detailed information about the paintings".
Ninety-five percent of respondents indicated that nothing in the museum appeared different from how it appeared on the computer. After visiting the museum, 60% of the subjects felt that the content of the computer software should be changed. Eighteen percent of those subjects indicated that they would like more images and more information about the artwork. Another 9% of the participants wanted the software to be more interactive, or to include 3-D images, a floor plan, more directions, or clearer instructions about the click-and-hold feature.
Forty-five percent of the visitors reported that they would not use the computer again if revisiting the Johnson Museum of Art. Reasons given included: they were already familiar with the museum, the program was not useful, they would use it only if there was new information, or only if there was information about the recent exhibitions.
Subjects were then asked what they liked best about the museum. Twenty-four percent of the visitors most liked the variety of artwork; 18% liked the recent exhibitions, and 12% liked the view. Other responses included the permanent collection, the architecture, the Rembrandt works, and how well the works were displayed.
Responses to what was least liked about the museum indicated that signage was the most significant issue (35.7%). Twenty-one percent reported that wayfinding was difficult in the museum. Other responses included a dislike for the Modern art, the second-floor American art, glare in the galleries, and the Asian art, along with a comment that the windows should be cleaned. Forty-seven percent of the respondents felt that the signage in the museum was good, 26.7% felt that it was not helpful, and 20% felt that the layout of the museum was obvious.
Not one of the subjects interviewed had visited the museum web site before.
Groups Analysis:
Statistical analysis was performed to determine the relationships among the groups of interest. Two groupings in particular were examined for significance: age group and location of residency. Because all subjects indicated that they read English fluently and 87% of subjects were familiar with using computers, these attributes did not yield groups that could be compared.
The relationship between the average number of errors per person and the age group of subjects was examined (Appendix 8). A graph of these variables indicated that participants in the under-16 age group had the most errors. This result may have been heavily influenced by the very small sample size for this age group (n=2); therefore, a comparison of the average number of errors per screen versus age group was thought to give a more accurate picture of the data. By this measure, subjects in the 46-60 and over-60 age groups had the greatest number of errors per screen (Appendix 9).
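The normalization described above — comparing errors per screen rather than raw errors per person, so that small groups and uneven usage do not distort the comparison — can be sketched as follows. The counts below are hypothetical illustrations, not the study's raw data.

```python
def error_rates(errors, screens, subjects):
    """Normalize raw error counts two ways: per person (sensitive to
    very small groups) and per screen (controls for how much each
    group actually used the program)."""
    per_person = {g: errors[g] / subjects[g] for g in errors}
    per_screen = {g: errors[g] / screens[g] for g in errors}
    return per_person, per_screen

# Hypothetical per-group totals, NOT the study's data
pp, ps = error_rates(
    errors={"under 16": 18, "46-60": 27},
    screens={"under 16": 20, "46-60": 60},
    subjects={"under 16": 2, "46-60": 6},
)
# A group with few subjects can dominate the per-person measure
# while looking unremarkable per screen, and vice versa.
```

Per-person rates answer "how error-prone was a typical member of this group?", while per-screen rates answer "how error-prone was the interaction itself?" — the latter is the fairer comparison when usage differs across groups.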
A significance test between the total number of errors per person and location of residence (out-of-town vs. Cornell community) was performed. The Cornell community group was defined as subjects residing in Ithaca, including members of the Cornell community. A t-test assuming unequal variances at alpha 0.05 determined that the mean number of errors was not significantly different between out-of-town residents (mean=6.46) and the Cornell community (mean=5.4).
Total transaction time spent on the software program was also compared between out-of-town residents and the Ithaca/Cornell community. A t-test assuming unequal variances at alpha 0.05 determined that the mean total transaction time was not significantly different between out-of-town residents (mean=175.71 seconds) and the Cornell community (mean=149.08 seconds).
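The unequal-variance (Welch) t-test used in the two comparisons above can be sketched in a few lines. The samples below are hypothetical, not the study's error counts or transaction times.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples,
    without assuming equal variances."""
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se2 = v1 / n1 + v2 / n2             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical error counts, NOT the study's data
out_of_town = [8, 5, 7, 6, 9, 4]
cornell = [5, 6, 4, 7, 5]
t, df = welch_t(out_of_town, cornell)
# Compare |t| against the two-tailed critical value for df at alpha 0.05;
# if |t| is smaller, the difference in means is not significant.
```

The Welch variant is appropriate here because the two residency groups differ in size and there is no reason to assume their variances are equal.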
The average transaction time spent on the program was also examined by age group. A bar graph of these two variables suggests no apparent difference among the mean times of the groups (Appendix 10); all means fell within the 95% confidence intervals.
Finally, the total transaction time per person was compared with the total time spent in the museum. A Pearson r correlation indicated a weak relationship (r=0.4).
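For reference, the Pearson product-moment correlation reported above can be computed directly from its definition. The helper below is a minimal sketch; the paired times are hypothetical, not the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical pairs (seconds on the program, minutes in the museum),
# NOT the study's data
program_time = [120, 150, 90, 200, 175]
museum_time = [35, 60, 30, 45, 90]
r = pearson_r(program_time, museum_time)
# r ranges from -1 to 1; values near 0.4 indicate a weak positive relationship.
```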