# Proof of Concept Firebase A/B Testing Implementation #9679
### Summary

The mobile app has everything in place for running A/B tests in Firebase. To start a new experiment or see existing experiments, visit the A/B Tests tab in the Remote Config section of the Firebase console. Other than adding the appropriate feature toggle to remote config, updating the code to use that feature toggle, and creating the experiment in Firebase, nothing else is needed to run an A/B experiment.

### Testing implementation

I created 2 experiments for this spike, 1 for iOS and 1 for Android, since each OS needs its own experiment. Experiments are built around audiences and variants that are determined by feature toggle values. For the sake of this spike, I used one of our existing feature toggles. I had 2 variants for these experiments, 1 being the baseline (the toggle's current remote config value) and 1 being the test variant with the toggle flipped.
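For reference, here's a minimal sketch of what the client-side toggle check backing an experiment looks like, assuming the app reads toggles through `@react-native-firebase/remote-config`. The toggle name `exampleFeatureEnabled` is a placeholder, not the toggle actually used for this spike.

```ts
import remoteConfig from '@react-native-firebase/remote-config'

// Placeholder toggle name -- a real experiment keys its variants on
// whichever remote config parameter the feature code reads.
const TOGGLE = 'exampleFeatureEnabled'

export const isFeatureEnabled = async (): Promise<boolean> => {
  // Local default, used before the first fetch and as the baseline
  // value for users who aren't assigned to an experiment variant.
  await remoteConfig().setDefaults({ [TOGGLE]: false })

  // Fetch and activate values from Firebase; users enrolled in the
  // experiment receive the value for their assigned variant here.
  await remoteConfig().fetchAndActivate()

  return remoteConfig().getValue(TOGGLE).asBoolean()
}
```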
### Potential issues

One issue I ran into when testing was the remote config override being set in the app, which prevents the app from fetching feature toggle values from Firebase and makes it fall back to the local remote config values instead. This seems to be caused by the waygate logic setting the override value.

### Metric tracking

Another useful thing about Firebase A/B testing is that you can choose a "primary goal" for the experiment, and Firebase will track results against it. For instance, I chose retention of 1 day as the goal for testing, but you can choose from a wide list of options. You can also set one of our custom analytics events as the primary goal, and Firebase will track metrics around that event. In terms of general metrics, you can see how many users are participating in the experiment and the percentage of users receiving each variant.

### Practical use case

A good example of where A/B testing fits is when you want to test a new flow for completing a task, e.g. refilling a prescription, and you can't decide which flow would work better for users. You could create an A/B test with two variants and add a custom analytics event, e.g. one that fires when the refill is completed, then set that event as the experiment's primary goal and compare how each variant performs. A sketch of this is below.
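To make the practical use case concrete, here's a hedged sketch of logging a custom completion event with `@react-native-firebase/analytics`. The event name `refill_completed` is hypothetical; a real implementation would log whatever custom event the experiment's primary goal is set to in the Firebase console.

```ts
import analytics from '@react-native-firebase/analytics'

// Hypothetical custom event. Selecting this event as the experiment's
// primary goal in the Firebase console makes Firebase tally it per
// variant automatically -- no variant parameter needs to be passed.
export const logRefillCompleted = async (): Promise<void> => {
  await analytics().logEvent('refill_completed')
}
```

Both the old and new refill flows would call `logRefillCompleted()` on success; since Firebase attributes each event to the variant the user was assigned, the experiment results show which flow drives more completions.

### Common questions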
**Does a user stay in the same variant across sessions?**

Yes, once a user is assigned to an A/B variant, they will remain in that variant, so it'll be persistent across sessions.
**Is there a way to run a single experiment across both iOS and Android?**

No, unfortunately there's not, so you'll need to duplicate the experiment for both Android and iOS. This is because, in Firebase, the iOS app and Android app are technically two different apps, and an experiment can only be applied to one app.
Moved this to code review while it's being reviewed by @ajsarkar28 and @dumathane
After reviewing my findings above, Ameet suggested we run a trial A/A test in production, meaning a test where nothing has changed but which confirms that traffic is flowing appropriately. For the A/A test, we decided to use one of our existing feature toggles.

I created an experiment for both iOS and Android. After about a week of running the experiments, the iOS experiment had 57,000 total users and the Android experiment had 21,000 users, which accurately reflects the iOS/Android distribution of the users who use the app. The users in each experiment were spread evenly across the variants, with 50% receiving each.
This work is complete now that we have a clear understanding of how to implement A/B experiments and what metrics we can get from them.
Objective: Determine the technical feasibility of integrating Firebase A/B Testing with the app using Remote Config. The focus is to understand the setup process, necessary changes to the app’s architecture, and any potential constraints.
AC