DALL-E/Every illustration.

Freeform: A New Experiment From Every Studio

Reflect on 2024 with our AI-powered tool for creating smarter, more adaptive forms


Comments

@Mark_5418 6 months ago

Powerful observation and an important insight about the ubiquity of forms. Possibility or problem? TBD!

Oshyan Greene 6 months ago

Interesting idea! I would love to hear what use cases you have in mind, because most of the examples you bring up quickly raise problems/concerns in my mind, e.g. the consistency of data gathered when the questions are all different, biasing effects, and so on. It's obviously not an approach that's appropriate when gathering comparable data is the goal (which seems to *often* be the case). I know there are other scenarios where forms are needed and often used, but I'm not totally seeing the *value* here, much as the concept intrigues me.

The experience of testing it just now was interesting and actually insightful with this self-reflection prompt. It feels a bit like a formalized and directed AI chatbot experience, which is not a bad thing.

Btw, I got stuck at the "Save your answers?" part; nothing happened when I told it to Save. After refreshing, it prompted me correctly for my name and email and I was able to save. However, "saving" just emailed me a link to my answers *on your site*, which is definitely not what I hoped for. I really think the answers should be in the body of the email, or at least an attachment (text/.md).

Just in case that was supposed to happen, I clicked the link and tried again to send it to myself; this time it got stuck after I put in my name and email, not showing any confirmation or moving to the next screen. I pressed the button twice, so I got 2 emails. Refreshing again made it progress to the Summary/Archetype screen. I'm on Windows 10, Google Chrome, using uBlock Origin.

Cassius Kiani 6 months ago

@Oshyan in my mind, it depends on your goal.

When you're doing user research, or learning about problems (from health to wealth), it seems quite hard to ask the right standardised questions to get the right information.

Even when you want comparable data, it's not obvious to me that the questions (or even answers) need to be standardised at the responder level, i.e. for the person who answers. I would agree that at some stage standardisation helps; when that should happen, however, is an open question in my mind.

My hunch is that there’s valuable signal which is lost in static, standardised questions. At least, that’s what I saw in healthcare.

Cognitive Task Analysis (CTA) from Gary Klein is another reasonable example here. In CTA, you ask unstructured questions to learn how experts make decisions. CTA has a structure, e.g. the practitioners know what success looks like, but you need to ask the right questions, then double-click on the right spots in the answers, to get any value. Standardised questions and answers here would seem to lead to noise rather than signal.

My hunch is that in a few years, models will be better at asking tailored questions than any static form (which is why I'm exploring Freeform).

Thanks for the feedback on Freeform. Static links were a conscious decision, because it was faster for me to ship this. Also, that's a strange issue re: getting stuck; I'll try to resolve it.

Oshyan Greene 6 months ago

@cassius Thanks, interesting examples! Survey design is such a challenging topic even without variability in what is prompting each respondent, but it's true that absolute consistency or comparability is not always the goal.

Another interesting angle: survey design might in part be so difficult because the same question (posed the same way) can elicit not only different *responses*, but even different *understandings* in different people. From that simple idea one can perhaps see the seed of a proof that surveys as a means of gathering consistent data are inherently limited.

Getting from that to confidence that AI can do a better job of achieving comparability *via* customization per respondent is another stretch, but not out of the question. I can imagine analysis of millions of survey question:answer pairs, with demographic or even individualized (but anonymous) data on respondents, that might help an AI optimize for consistency in question interpretation across all survey respondents. A simple example would be customizing the sophistication of the language to the reading level of the respondent, but it could get much more sophisticated than that. All very theoretical, but intriguing to ponder!
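To make the reading-level example concrete, here is a toy sketch (purely hypothetical; it assumes an OpenAI-style chat API, and the model name and prompt are stand-ins):

    # Hypothetical sketch of the reading-level idea: rewrite a fixed survey
    # question for a respondent's estimated reading level while keeping the
    # meaning identical, so answers stay comparable.
    from openai import OpenAI

    client = OpenAI()

    def tailor_question(question: str, reading_level: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # stand-in model name
            messages=[
                {"role": "system", "content": (
                    "Rewrite the survey question for a reader at the given "
                    "reading level. Keep the meaning identical so answers "
                    "stay comparable. Return only the rewritten question."
                )},
                {"role": "user", "content": (
                    f"Level: {reading_level}\nQuestion: {question}"
                )},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(tailor_question(
        "To what extent do you concur with the proposed remuneration policy?",
        "6th grade",
    ))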

Cassius Kiani 6 months ago

@Oshyan thanks for these thoughts, there's lots for me to muse on here. I appreciate this.

John Wetzel 6 months ago

What does this end up looking like on the "Results" & Analysis side of the form?

Do the questions follow any pattern or guidance, or do they just run question 1 -> question 10, with each question + response completely unique?

Cassius Kiani 6 months ago

@John Wetzel you can get the model to decide how many questions to ask, and you can even adjust this down the line (based on responses) in a semi-reliable manner. In this version, I hardcoded the question count so I could ship a little faster (and reduce the total surface area too). All questions are unique based on the answers; all I need to do is seed the model with a single hardcoded question, and then the rest is LLM magic.
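Roughly, the loop is something like this (a simplified Python sketch, not Freeform's actual code; the model name, prompts, and seed question are stand-ins):

    # Simplified sketch of the loop described above: seed with one hardcoded
    # question, then let the model write each follow-up from the running
    # transcript. The question count is hardcoded, as in this version.
    from openai import OpenAI

    client = OpenAI()
    NUM_QUESTIONS = 10
    SEED_QUESTION = "Looking back on 2024, what moment stands out most to you?"

    def next_question(transcript):
        # Ask the model for one tailored follow-up based on all prior Q&A.
        qa_text = "\n".join(
            f"Q: {t['question']}\nA: {t['answer']}" for t in transcript
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # stand-in model name
            messages=[
                {"role": "system", "content": (
                    "You are running a reflective year-in-review form. "
                    "Given the questions and answers so far, write the single "
                    "best next question. Return only the question text."
                )},
                {"role": "user", "content": qa_text},
            ],
        )
        return resp.choices[0].message.content.strip()

    transcript = []
    question = SEED_QUESTION
    for i in range(NUM_QUESTIONS):
        answer = input(f"Q{i + 1}: {question}\n> ")
        transcript.append({"question": question, "answer": answer})
        if i < NUM_QUESTIONS - 1:
            question = next_question(transcript)

The only hardcoded pieces are the seed question and the question count; every later question comes from the running transcript.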

John Wetzel 6 months ago

@cassius makes a lot of sense.

I'm also curious about how you're imagining it would work as an Admin viewing all responses. In a traditional form tool, it's a table with a column for each question and rows for responses.

Cassius Kiani 6 months ago

@John Wetzel that's a good question, and it's one I'm trying to answer right now.
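For illustration, one shape this could take (a sketch of an idea, not what Freeform does today): because every respondent sees different questions, there is no fixed column per question, but a long-format table, one row per question/answer pair, still supports an admin view:

    # Hypothetical sketch, not Freeform's actual schema: responses can't pivot
    # into one column per question when every respondent sees different
    # questions, so store one row per question/answer pair instead.
    import sqlite3

    conn = sqlite3.connect("freeform.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS responses (
            respondent_id TEXT,
            position      INTEGER,  -- 1..N within that respondent's form
            question      TEXT,
            answer        TEXT
        )
    """)

    # Admin view: group by respondent instead of by question column.
    rows = conn.execute(
        "SELECT respondent_id, position, question, answer "
        "FROM responses ORDER BY respondent_id, position"
    )
    for respondent_id, position, question, answer in rows:
        print(f"{respondent_id}  Q{position}: {question} -> {answer}")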

Paul Carney 6 months ago

Fascinating experiment. I was wondering when it was going to end, so an indicator of how many more questions remain would be good. I see this being used for problem-solving or negotiations, where it can help walk through dynamic questions based on previous answers. It could also be used as a job-applicant interview engine (I built a set of prompts for similar activities) for both the employer and the applicant.

Cassius Kiani 6 months ago

@paul.carney thanks Paul, did you see the progress bar at the top? If you didn't, it sounds like there's an issue I'll need to fix. What device and browser were you using?

Also, I've mused on job applications as well. If you have any fun ideas here, I'd love to hear them.