Tuesday 7 April 2015

Survey results

Responses to survey questions (abbreviated):

1. "The content of the illustrations…helped me to understand the subject matter on each page:
  •  32 respondents agreed
  • 1 reponded was unsure
2. Open-ended question

3. "The amount of illustration on each page is:"
  • 32 respondents said enough to help understand the subject matter
  • 1 respondent said not enough to help understand the subject matter
4. "The amount of text on each page is:"

  • 32 respondents said enough to help understand the subject matter
  • 1 respondent said not enough to help understand the subject matter
5. "The style of illustration works well with the typeface used:"
  • 30 respondents agreed
  • 3 respondents were unsure
6. Open-ended question

7.  "The combination of image-text helps me to understand the subject matter of each page:"
  • 32 respondents agreed
  • 1 respondent was unsure
8. "I find that reading the text on screen is:"
  • 32 respondents said easy
  • 1 respondent said quite difficult
9. Open-ended question

10. Open-ended question


My survey results are in!  In total I got 33 respondents, which is a decent enough number to get data from.  The survey was posted alongside the prototype on my Facebook page, and shared by some of my friends, who were able to garner a slightly larger audience.  Mostly, I got the results I had hoped for, affirming that the relationship between the image and the text was evident and that the text was legible.  A notable result, however, was that several respondents felt that the text left them needing to know more.  Unfortunately I didn't think ahead and ask which pages needed elaboration, so I could not pinpoint the specific pages that could be improved.  Whoops!

Another issue with the questionnaire, which only occurred to me later, was that the prototype had 60 viewable screens, so I had very little control over what respondents actually viewed: they may all have taken different routes and stayed in the app for differing amounts of time.  There is no way to know whether someone spent 5 minutes or 50 seconds exploring the app, which again was an oversight of mine.  That being said, most people gave answers which indicated that they had had a proper look through the build.

I also got a lot of informal feedback from Facebook friends who took the survey, some of whom viewed the prototype on an iPad and were able to report back on the user experience in general.  I didn't have an iPad at the time of building the prototype, as I had to be at home for a week, so getting others' feedback was very valuable.  The general comment was that the breadcrumb navigation system was very unclear and did not feel intuitive, so I will need to go back and review how the app can be navigated without relying on it.

One respondent suggested implementing iPad gestures.  The good news is that I have discovered InvisionApp does actually support swipe gestures, which opens up a range of interactions that iPad and tablet users will already know intuitively.  This may remove the need for some buttons, such as the back button.

People also pointed out that the Home screen confused them and felt more like a "Start" screen, so that will be changed.  One person mentioned that they expected the blue text to be a hyperlink, while others appeared not to notice and said it helped highlight the important information for them.  I will try out other colours to see if I can find one that works within the colour scheme, or alternatively underline any text that does link to another screen, making it apparent that it is a link (since links are commonly underlined).

I have previously mentioned that I am unsure whether or not to keep the back button in the interface.  One person mentioned they expected the back button to take them back to the root of the subsection, which is something I had not thought of and may be a better use for it than a browser-style back button that simply returns to the previous page.

The survey comments also gave useful insight into the content of the illustrations.  Most people agreed that they were easy to understand; however, some pointed out that certain images, for example the head split open to reveal an exclamation point, did not connote mindfulness to them.  One useful piece of feedback was that the head of the character was much larger than the body, which is only fitting for the brain- and mind-related sections of the app.  It's a very good point I had never thought of before - if an illustration demonstrates a body scan, for example, it doesn't make sense to emphasise the head instead!

All of this feedback will inform the next iteration of the app.  I hope to test it on an iPad soon, once I have implemented gestures and changed the necessary parts of the user interface.



