StorySurvey3: evaluation interviews, automated!
You start your evaluation full of enthusiasm and do your first face-to-face interviews. You learn a lot: what is the atmosphere like in the office? Are staff reluctant to let you talk to project users?
But it's a big project: there are potentially hundreds of people you could talk to, and a set of questions you need to cover. Perhaps you want to trace out people's views of the causal links between an intervention and some outcomes. Wouldn't it be great if you could clone yourself and send yourself into everyone's inbox?
With the latest iteration of our survey tool, StorySurvey, you can do just that right now: design an AI-driven interactive interview in any world language, share the link with your respondents and download the transcripts! All your respondents need is an internet-connected device. It's free for reasonable use (up to 200 respondents).
What is StorySurvey like for respondents?
Here's an example survey. Try it!
Notice that the link has "?survey=msc-conference" at the end of it, to direct you to a specific survey. That is the kind of link you send to a respondent.
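To make the pattern concrete, here is a minimal sketch in Python of how such a link is put together. It assumes the app URL given further down and the "survey" query parameter shown above; the survey name "msc-conference" is just the example from the demo link.

```python
from urllib.parse import urlencode

# Base URL of the StorySurvey app; the survey name goes in the
# "survey" query parameter, as in the example link above.
BASE_URL = "https://causalmap.shinyapps.io/StorySurvey3/"

def survey_link(survey_name: str) -> str:
    """Build a shareable respondent link for a named survey."""
    return BASE_URL + "?" + urlencode({"survey": survey_name})

print(survey_link("msc-conference"))
# https://causalmap.shinyapps.io/StorySurvey3/?survey=msc-conference
```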
How do you use StorySurvey to design a survey?
If you want to experiment with StorySurvey and design your own surveys, go straight to https://causalmap.shinyapps.io/StorySurvey3/.
It works like this: you start from a script, also known as a prompt. This is an instruction that tells the AI interviewer how to conduct the interview. It can be as simple or as complicated as you like, but basically it's just plain English (or French, or any other world language).
We have prepared a few example prompts. At Causal Map we are most interested in interviews which encourage respondents to tell causal stories, even getting the interviewer to spell out the individual causal links as it goes. But you can design any kind of interview.
Your job is simply to copy and adapt any of the pre-prepared prompts, or create your own from scratch.
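Purely for illustration (this is a made-up sketch, not one of the pre-prepared prompts), a simple prompt might look something like this:

```text
You are a friendly interviewer evaluating a rural water project.
Greet the respondent, then ask them what has changed in their
community since the project started. For each change they mention,
ask what they think caused it, and probe one level further back
("and what led to that?"). After three causal stories, thank the
respondent and end the interview.
```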
Then test your survey by sending the prompt to the automatic interviewer, which will then start interviewing you.
Keep editing your prompt and testing again until you are satisfied.
Then, you can get a link to the finished survey and send it to others to answer.
Your survey link can be public or private. If it is public, the name you give your survey becomes part of the link, so it might be possible to guess; if you choose a private link, your survey URL will be very hard to guess.
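StorySurvey's actual scheme for private links isn't published here, but a typical way to make a URL hard to guess is to embed a random token rather than a human-readable name. A hypothetical sketch:

```python
import secrets

# Hypothetical illustration only: a private survey link typically
# embeds a random token that is infeasible to guess, instead of a
# human-readable survey name.
token = secrets.token_urlsafe(16)  # e.g. "q0X3b7Jv9ZkT2mWc4R8sAg"
private_link = f"https://causalmap.shinyapps.io/StorySurvey3/?survey={token}"
print(private_link)
```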
Your prompt is always public, so that others can adapt it to build ever-better evaluation interviews. In this way, we hope a library of evaluation interview prompts will grow.
How do you get your results?
You can view and test out StorySurvey, and get interviewed, without even logging in. However, to save and share a survey, you need to log in with a Google account or email address.
Right now, when you download your transcripts, you can analyse them any way you want; there is no "standard" way to do it. Soon, though, you will be able to visualise your results in Causal Map.
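As one example of a do-it-yourself analysis, here is a minimal sketch in Python. The file name and column layout are assumptions for illustration; the actual download format may differ.

```python
import pandas as pd

# Illustrative only: suppose each downloaded transcript is a CSV with
# one row per conversational turn and "respondent", "speaker" and
# "text" columns. Adjust to whatever format you actually receive.
df = pd.read_csv("storysurvey_transcripts.csv")

# A first-pass look: how long were the interviews?
turns_per_respondent = df.groupby("respondent").size()
print(turns_per_respondent.describe())

# Pull out just what respondents (not the interviewer) said,
# ready for coding or causal mapping.
answers = df[df["speaker"] == "respondent"]["text"]
print(answers.head())
```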
Contact us if you need help with creating or launching a survey or visualising the results: hello@causalmap.app.
Technical details
StorySurvey uses GPT-4.
If you've experimented with ChatGPT before, you might have noticed that its "temperature" is high, which is good for writing essays. In StorySurvey, the temperature is set to zero, which makes the conversation more deterministic: good for social research.
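StorySurvey's internals aren't published, but for the curious, here is a minimal sketch of what a temperature-zero GPT-4 call looks like with the OpenAI Python SDK; the system and user messages are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# With temperature=0 the model favours the most likely token at each
# step, so repeated interviews stay much more consistent than at the
# higher temperatures used for creative writing.
response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {"role": "system", "content": "You are a survey interviewer."},
        {"role": "user", "content": "Hello, I'm ready to start."},
    ],
)
print(response.choices[0].message.content)
```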
Chat-based interviews like this are no substitute for face-to-face key informant interviews, but they can be used to reach a much larger number of additional respondents.
Obviously you can't reproduce a whole interview approach in a simple prompt! But it's interesting to try, and a simple prompt can still generate a useful survey.
This site is free and experimental, and we at Causal Map Ltd make no promises about what will happen to it in the future. If you want help with a survey, contact us.
Privacy
StorySurvey data is stored in a SQL database on Amazon RDS, which uses industry-standard security. Data is transferred using HTTPS (HTTP over TLS). The text of the interview passes through the OpenAI servers.
Data submitted through the OpenAI API is no longer used for service improvements (including model training) unless the organisation opts in, which we have not done. OpenAI deletes user data after 30 days. We recommend that respondents do not submit data which might identify themselves or others; respondents have to accept this condition before proceeding with the survey.
Enjoy!