tl;dr: Conducting my first field experiment has taught me many things. Learning through failure and testing as often as possible are only two of them.
There is light at the end of the tunnel, guys! In the past few months, I was pretty busy with my studies and my final thesis project. The topic is a recent trend in product design and behavioral economics called digital nudging. In the first half of 2019, I did a scientific literature review on it as part of my studies, and the topic fascinated me right from the start. Some time after the literature review, I got the opportunity to write my master's thesis about it as well. To fill a research gap, I decided to study the effects (positive as well as negative) of nudging consumers in digital environments in the domain of insurance products. Insurance decisions are usually complex and hard to make, which is why nudging might be a promising way to facilitate decision making. The question is... is this always a good idea? Or does it maybe even harm the user experience? What is the right amount of information users need? And how do you facilitate decision making without manipulating the user? All in all, a lot of interesting questions that I want to answer.
One of the most effective ways to show the effect of something that no research has examined before is to perform an experiment. I decided on an online field experiment, which usually provides the most realistic results. In a field experiment, the research happens "in the field," on a live, production-like site. Since the participants do not know that they are part of an experiment, the design can be a bit tricky and error-prone, especially when you don't have a lot of experience with empirical tests.
For me, this was the first real experiment of significant scope that I designed and performed by myself, and during the process I learned a ton about scientific methods. The first thing I did was also one of the most difficult: thinking about how the experiment should actually look. How to nudge? Which decision? What to measure? What is the expected outcome? To gather some information, I read a ton of papers with similar experiment setups and talked to people in the scientific community. One thing that helped me a lot was very early feedback from friends: feedback on initial mockups, on my thoughts about the experiment, and even on the ideas I wanted to test. The direction of the experiment changed nearly a dozen times, and honestly, I sometimes asked myself whether this was still the right experiment to run. But suddenly, it clicked, and everything made sense. Lesson learned... don't freak out and don't expect to get everything right from the start. Be patient and collect feedback as early as possible!
Okay, first step, preparation: check. What's next? Starting the experiment, of course! Well, no. The idea was to do the nudging on a self-developed webpage that simulates the site of a new insurance company. After finishing the complete branding, design, and development of this application, I also connected it to an analytics tool, so that user behavior could be tracked and analyzed. To test that everything was working, I sent the link to some people who clicked through the site. After fixing some smaller technical errors, I wondered whether this setup would actually produce the data I needed for the analysis part of the experiment. That's why I started a pre-study. The goal of this pre-study was to run the research in a smaller scope, so I could test the technical setup as well as the outcomes. The easiest way to get some people involved was Amazon MTurk, a platform that lets you design tasks for human workers in a very fine-grained way. Perfect for gathering initial data! Around 200 people tested the website and filled out a post-survey, and it was totally worth it: afterward, I could fix several things on the site, in the technical setup, and in the experiment design. Lesson learned... run pre-tests to gather initial data and make sure everything works smoothly before the final experiment!
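The core of such a setup is randomly but consistently assigning each visitor to a treatment ("nudge") or control variant and logging their behavior for later analysis. Here is a minimal sketch of what that could look like; all function names and the in-memory event log are hypothetical and stand in for whatever analytics backend is actually used:

```javascript
// Hash a visitor ID to a number so the same visitor always sees the same variant.
function hashId(id) {
  let h = 0;
  for (const ch of id) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

// Deterministic 50/50 assignment to "nudge" (treatment) or "control".
function assignVariant(visitorId) {
  return hashId(visitorId) % 2 === 0 ? "nudge" : "control";
}

// Record an event for later analysis. Here events are just collected
// in memory; a real setup would send them to an analytics service.
const events = [];
function track(visitorId, name, payload = {}) {
  events.push({ visitorId, variant: assignVariant(visitorId), name, ...payload });
}

// Example: one visitor views the page and picks a (hypothetical) policy.
track("visitor-42", "page_view");
track("visitor-42", "policy_selected", { policy: "basic" });
```

Deterministic assignment (hashing the visitor ID instead of calling `Math.random()` per page load) matters in a field experiment: a returning visitor must not flip between variants, or the tracked behavior becomes impossible to attribute to one condition.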
With the new insights from the pre-study, I made several adaptations to the experiment setup and the data tracking. The final obstacle was the question of how to get people to visit the website and become part of the experiment. After some research, Facebook ads turned out to be the most promising option. With a small amount of money and some time, I set up a few advertisements. In the end, over 10,000 people saw the ad, and around 400 visited the website. I collected a lot of promising data that I am currently analyzing. Besides analyzing the data, I'm writing everything down for the thesis itself. But somehow, I also wanted to share my learnings on this blog.
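At its simplest, the analysis step boils down to comparing how the two groups behaved, for example their conversion rates. A tiny sketch with made-up numbers (not the real study data, which isn't published here):

```javascript
// Fraction of visitors in a group who completed the target action.
function conversionRate(group) {
  return group.converted / group.visitors;
}

// Hypothetical group sizes and conversions for illustration only.
const control = { visitors: 200, converted: 30 }; // saw the plain page
const nudge = { visitors: 200, converted: 45 };   // saw the nudged page

// Absolute difference between the two conversion rates.
const lift = conversionRate(nudge) - conversionRate(control);
console.log(`Absolute lift: ${(lift * 100).toFixed(1)} percentage points`);
```

A real analysis would of course also test whether such a difference is statistically significant rather than noise, but the raw comparison above is where it starts.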
All in all, doing my first experiment has been a huge and scary task, and a very challenging one, too. But it is also one that has taught me many things I can use in other parts of my life. Research early and a lot, but also gather feedback as soon as possible. Don't freak out when things change a lot! Test as often as possible, even if only in a smaller scope at first. And make adaptations based on the test results to get the best possible outcome.
Keep Creating ✌️