Have You Outgrown Your WYSIWYG Experimentation Tool?

The answer might surprise you.

In the beginning, there was JavaScript injected into the DOM for A/B testing on the web…

Well, that’s not quite right, is it? A/B testing has a long and storied history. Going back almost 20 years, many very successful companies leaned heavily into A/B testing and experimentation, making it part of their operational DNA.[1] Companies like Booking.com and Microsoft leveraged experimentation through homegrown tools and heavy amounts of hands-on data analysis to build brands, grow their revenue, and pivot based on market conditions. Jeff Bezos famously said, “Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day.”[2]

Mainstream adoption of A/B testing, however, can be linked very directly to the advent of WYSIWYG-driven, browser-based A/B testing tools that were easily leveraged by marketing teams across industries and verticals. This makes sense for a lot of reasons. Surface-level changes like copy and imagery are compelling targets for rapid iteration, and data-driven user experience experimentation can be powerful for reducing friction and achieving user-centric goals.

As with any good tool, temptation creeps in with testing platforms:[3] the temptation to use them as the solution for every problem. This temptation should be avoided. Every tool has its place and real value, but testing platforms are not magic wands for all your experimentation needs.

Building sand castles

This is what I like to call the sand castle approach to A/B testing. You aren’t building a castle for your users. Those have solid foundations, meant to stand the test of time. You are building a sand castle, meant to be washed away over time.

It looks like a castle, but it isn't. Your website UX changes look like real features but aren’t. They aren’t even built in the way you will need to construct them if they prove worthy of being made real.

Peer review, security, and safe development practices

Experiment code is still code, and it is not limited in complexity; it deserves the same development practices as everything else you ship. Development practices and DevOps have matured extensively over the last few years. For example, all code is typically peer-reviewed to meet organizational security and performance standards. Lots of time and effort go into this kind of work. So why would we ignore it just to enable rapid development for certain areas of our web presence? Yet this is exactly what happens when visual editor tools are used for experimentation. Not only is the code inserted directly into the DOM, it also circumvents all of the tooling in place for safe development practices: no peer code review, no security scans, no repository for versioning.

Imagine an experiment that inserts a new or distinctly different lead generation web form without the level of peer code review that would normally validate it. There is certainly a possibility that fields could be sent to the wrong places, or formatted improperly, but there are also scenarios that create security issues, such as accidentally opening the door to an injection attack or Server-Side Request Forgery (SSRF).
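To make the risk concrete, here is a minimal sketch (the form helpers are hypothetical, not taken from any real vendor snippet) of how string-built markup, common in injected experiment code, can open an XSS vector, and how an escaped version avoids it:

```javascript
// BAD: interpolating a value straight into markup. If the value is ever
// attacker-controlled, it can break out of the attribute and inject script.
function unsafeFormRow(label, value) {
  return `<label>${label}<input value="${value}"></label>`;
}

// Safer: escape markup-significant characters before interpolation.
// This is the kind of detail a peer review would normally catch.
function safeFormRow(label, value) {
  const esc = (s) =>
    String(s).replace(/[&<>"']/g, (c) => ({
      '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
    }[c]));
  return `<label>${esc(label)}<input value="${esc(value)}"></label>`;
}
```

In the unsafe version, a value like `"><script>…</script>` closes the attribute and injects live script into the page; the escaped version renders the same input as inert text.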

Lead time to release

To create treatments, JavaScript is used to manipulate the page as it exists. In other words, WYSIWYG tools create special JavaScript snippets that are inserted on a web page as it is loading. When the portions of the page that need to be changed have loaded, say a hero image and its text, these snippets attempt to quickly rewrite them.

This can lead to “page flicker” as areas of the page are swapped out with the experimental version, as well as overall performance issues with slower page load times. We’ve probably all experienced this as we browse the web, with the content of a webpage suddenly flashing and changing.

It also means relying on a lot of “interesting” programming techniques, unlike anything you would use to build a production version of the component you’re experimenting with.
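A minimal sketch of the polling pattern these injected snippets typically rely on (the `applyTreatment` helper and its parameters are hypothetical; real vendor snippets are minified and far more elaborate). The gap between first paint and the mutation is the flicker users see:

```javascript
// Poll until the target element exists, then mutate it in place.
// `query` is the lookup function; in a browser it would be
// (sel) => document.querySelector(sel).
function applyTreatment(query, selector, mutate, intervalMs = 50, timeoutMs = 3000) {
  const start = Date.now();
  const timer = setInterval(() => {
    const el = query(selector);
    if (el) {
      clearInterval(timer);
      mutate(el); // e.g. swap the hero headline and image for the treatment
    } else if (Date.now() - start > timeoutMs) {
      clearInterval(timer); // give up; the user keeps the control experience
    }
  }, intervalMs);
}
```

Nothing like this polling-and-rewriting would exist in the real production component; it is pure scaffolding for rewriting a page that has already rendered.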

In the world of rapid iteration and experimentation, many of these changes are not only time-sensitive, but represent a great deal of potential revenue. A winning iteration of an experiment will need to be rewritten for production. If you quantify the time between the completion of an experiment and getting it into the production code base and released, you will likely see a lot of potential lost revenue, and an ever-growing backlog of winning treatments awaiting release as more and more experiments run their course in tandem.

Longtime readers of the LaunchDarkly Blog know we’ve been a consistent voice on the value of feature management as it becomes more and more accepted at an organization. Experimentation managed outside of the development release process, as is the case with WYSIWYG experimentation tools, is a prime example of this disconnect. You might even be able to put a dollar value on it, as the days tick by between a winning experiment and the day it’s deployed.
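By contrast, a treatment served through feature management lives in the production code path from day one. A minimal sketch, assuming a generic `flags.variation()` client interface as a stand-in for any feature-management SDK rather than a specific vendor API:

```javascript
// Both variants are real, reviewed production code; the flag decides
// which one a given user sees. `flags` is a hypothetical SDK client.
function renderHero(flags, user) {
  const variant = flags.variation('hero-experiment', user, 'control');
  if (variant === 'treatment') {
    return { headline: 'Ship faster with less risk', image: '/img/hero-b.png' };
  }
  return { headline: 'Welcome', image: '/img/hero-a.png' };
}
```

Because the winning variant is already production code behind the flag, shipping it means rolling the flag out to 100% of users, with no rewrite and no lead time.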

Getting personal

If you really want to nail personalization, you need more than what happens in the browser. Personalization is a long-term goal for experimentation programs, and for good reason. When you nail down the best approach for customers with personalization, you are leveraging what you know about them, as well as what you know works for customers like them.

To really know a customer like that, you’ll be crossing channels outside of web browsing. Mobile usage, smart devices, even in-store activities are the keys to the personalization castle. (Want to know more about personalization across channels? Take a look at our article about syncing web and mobile experiences with feature flags.)

In closing

Visual editors certainly played a huge role in popularizing experimentation on the web, but in reality they hold a limited space in the field of experimentation as a broader practice. Measuring the outcomes of treatments and how customers react to them requires a huge amount of flexibility and precision. And while visual editors are not going anywhere anytime soon, we at LaunchDarkly are here to clarify when they are truly useful, and when experimentation is really best treated as part of feature management for an organization.

Citations

[1] https://www.linkedin.com/pulse/how-ab-testing-helps-microsoft-why-you-should-consider-rhys-kilian/

[2] https://observer.com/2017/11/forget-10000-hours-edison-bezos-zuckerberg-follow-the-10000-experiment-rule/

[3] https://en.wikipedia.org/wiki/Law_of_the_instrument

