Data Management

Experimentation is Paramount to Data-Driven Decision-Making

Market Trends

Experimentation is precise planning and design to ensure that appropriate data is studied and erroneous conclusions are prevented

According to Harvard Business Review, more than half of Americans rely on their gut feelings to decide what to believe, even when they are confronted with evidence that speaks to the contrary. Honestly, so do I, but only when the choice is between chocolate and vanilla ice cream.

Although intuition is a supportive tool, it may not be appropriate to base every decision on mere perception. While one's instinct can provide a hunch to start one down a specific path, it is only through experimentation that one can actually test various solutions, validate and assess them, and ultimately choose the right one.

Have you ever wondered why OTT platforms have such a great streaming experience? You probably have noticed that the featured show on the OTT homepage seems to change whenever you log in. It is all a part of their strategy to test multiple hypotheses and concepts on their customers.

The basic idea of a hypothesis test is that there is no pre-determined outcome. Organizations design an experiment with a control group and one or more experimental groups. While each of the experimental groups receives a different treatment, the control group receives the same experience as all other users not included in the test. As we delve deeper into this area, the following framework is a modest attempt to explain the holistic concept of experimental design and its various applications.
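The random split into a control group and experimental groups can be sketched in a few lines of Python. This is a minimal illustration, not any platform's actual assignment logic; the `assign_groups` helper and its parameters are hypothetical.

```python
import random

def assign_groups(user_ids, n_treatments=1, seed=42):
    """Randomly split users into one control group and
    n_treatments experimental groups of roughly equal size.
    Shuffling before assignment removes any ordering bias."""
    rng = random.Random(seed)  # fixed seed: reproducible assignment
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    names = ["control"] + [f"treatment_{i + 1}" for i in range(n_treatments)]
    groups = {name: [] for name in names}
    # Round-robin over the shuffled users keeps group sizes balanced.
    for idx, uid in enumerate(shuffled):
        groups[names[idx % len(names)]].append(uid)
    return groups

groups = assign_groups(range(100), n_treatments=2)
# 100 users split into three groups of 34, 33, and 33.
```

Because assignment is random rather than based on any user attribute, the groups are comparable like-for-like before the treatment is applied.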

Let us say you are driving down from Chennai to Bangalore. All of a sudden, you notice that your car is making a squeaking noise while it is running. So, you stop the car, walk around your car to the back, and listen for where exactly the sound is coming from. You observe that this sound is coming from the engine, so you open the car bonnet and see that one of the parts is wiggling. You try to rectify this issue and you realize that when you hold this part in place, the squeaking stops, and when you let it go, the squeaking continues. You repeat this action and conclude that the wiggling part is indeed the cause of this squeaking noise. You tighten this part, and the squeaking stops.

Let's try to describe what just happened in experimentation terms. Firstly, you observed the squeaking sound, and then you described it by pinpointing its precise location. Next, you hypothesized that the wiggling part could be related to this sound. You tested your hypothesis by holding down this part and observing if the sound stopped. You repeated the test and compared non-wiggling and wiggling conditions. In data analytics parlance, these conditions could be referred to as Treatment and Control settings. Finally, you deduced that the wiggling part was triggering the noise. You materialized the inference by tightening the part to stop it from wiggling.

Experimentation is intrinsically about precise planning and design to ensure that appropriate data is studied, and erroneous conclusions are prevented. In quantitative terms, the experimentation results ought to be statistically significant. Statistical significance indicates that a result or metric evaluated from the test is not likely to occur by chance. Instead, it is ascribed to a specific reason.

When we run an experiment or analyze its data, it is usually based on a sample because it is difficult and costly to gather data from the entire population. The sample is then used to make inferences about the population. Statistical significance helps determine whether the result is due to some factor of interest or not. The idea is to ensure that we feel confident about these findings. The insights should be real, and it should not be that we were merely fortunate in choosing a favorable sample.
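A common way to quantify that confidence for conversion-rate experiments is a two-proportion z-test. The sketch below is illustrative only (the function name and the sample figures are hypothetical, not from the article) and uses a normal approximation via the error function.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference in conversion rates
    between a control sample (A) and a treatment sample (B).
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 120/1000 conversions in control vs 150/1000 in treatment.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
significant = p < 0.05  # conventional 5% threshold
```

A p-value below the chosen threshold suggests the observed lift is unlikely to be an artifact of having drawn a lucky sample.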

Variability in the underlying population plays a pivotal role in determining whether a randomly selected sample will look vastly different from the population as a whole. In other words, greater population variability increases the likelihood of sampling error. The effect of variability within a particular population can be reduced by increasing the sample size to make it more representative. With larger sample sizes, we are less likely to get results that merely reflect randomness. Think about tossing a coin 10 times as opposed to tossing it 1,000 times. The more times we toss, the less likely we are to end up with a great majority of one particular result.
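The coin-toss intuition can be checked with a small simulation. This is a minimal sketch; the `spread` helper is a hypothetical name for "standard deviation of the observed head proportion across repeated experiments."

```python
import random

def spread(n_tosses, trials=200, seed=0):
    """Run `trials` experiments of n_tosses fair coin flips each
    and return the standard deviation of the observed proportion
    of heads across experiments."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    props = []
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
        props.append(heads / n_tosses)
    mean = sum(props) / trials
    return (sum((p - mean) ** 2 for p in props) / trials) ** 0.5

# Proportions from 1,000-toss experiments cluster far more tightly
# around the true 0.5 than proportions from 10-toss experiments.
spread_small = spread(10)
spread_large = spread(1000)
```

The standard deviation of the sample proportion shrinks with the square root of the sample size, which is exactly why larger samples make chance results less likely.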

Let us look at an illustrative business example of experimentation. A product manager has to convince the senior management to launch a new product line of denim jackets at department stores. The objective of this launch is to increase sales, grow the company's floor presence, and broaden the company's offerings. The manager wants to prove that this line would be beneficial before the company could pitch this idea to the stores.

So, the product manager conducts experimental research to build a strong case for this hypothesis. He or she runs a test at a few stores that carry the new line of denim jackets. These stores are situated at different locations, to measure target-market sales before and after the launch. The test runs for two months to determine whether the hypothesis is supported or rejected. Ultimately, the new line of denim jackets is launched at all of the stores because the results are favorable based on an estimate derived from a representative sample: store sales are likely to increase by 5% with the introduction of the new line of jackets.

However, it should be noted that even if a result is not statistically significant, it may still be of use to the organization. Conversely, when we are working with huge data sets, it is possible to obtain findings that are statistically significant but practically irrelevant. For instance, in the previous example, if we had discovered that store sales would increase by 0.001% with the introduction of new jackets, then it would not have been as relevant for the business. So, rather than obsessing over whether our findings are precisely right or not, we could think about the implication of each finding for the decision we want to make.
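The gap between statistical and practical significance is easy to demonstrate numerically. In this hedged sketch (the function and figures are hypothetical), a lift of just 0.1 percentage point is highly significant once each group contains ten million observations, even though such a lift may not matter to the business.

```python
from math import sqrt, erf

def z_p_value(rate_a, rate_b, n):
    """Two-sided p-value for the difference between two observed
    conversion rates, each measured on a sample of size n."""
    p_pool = (rate_a + rate_b) / 2
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (rate_b - rate_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Tiny lift, enormous sample: statistically significant,
# yet likely irrelevant for the decision at hand.
tiny_lift_p = z_p_value(0.100, 0.101, 10_000_000)
```

Effect size, not the p-value alone, is what connects a finding to the decision it is supposed to inform.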

Despite not being a perfect solution, experiments have a random allocation process that removes any prior biases between the experimental groups and provides the capability of like-for-like comparisons. Experimentation could have wide-ranging uses for an organization in seeking to find solutions for product development and building a data-driven culture.

Firms competing in high-tech environments routinely test variables such as page design, offerings, and services. More broadly, an experimental mindset has permeated much of the technology sector and is now spreading beyond it as well. However, when we think about the benefits of experimentation, what often gets overlooked are the benefits to a company's culture and its employee morale. By instilling a culture of experimentation, an organization can empower its employees to make a difference through their work. Using this approach, employees can experience the fruit of their labor that is backed by data as well.

Holistically, experimentation enables a business and its employees to evaluate multiple opportunities at the same time. It enables companies to test completely new strategies and gauge the reaction of their target customers, with minimal cost implications. Hence, experimentation is definitely a recommended tool for companies to enable more effective data-driven decision-making.

Author:

Yatin Budhiraja is Director of Analytics, Research, and Data at Fidelity Investments India, which is a global capability center of Fidelity Investments and a key fintech company. To know more about the company and the work we do, visit our website.


