A little, probably incomplete, advisory
2013-07-03 14:53:23
Insert Google Analytics as usual. What we want as the final result is the user flow visualization.
Therefore we track pages instead of events, because events, especially user events, will only be analysed in tables.
So we make every event a virtual page. Clicking the 'new' button, for example, which used to be the event 'use'/'click'/'new', now becomes
code -> _gaq.push(['_trackPageview', '/use/click/new']);
and will be visualized in the user flow. It is still available in the analytics tables.
We know the user made an interaction (use), that he clicked something (click), and that the performed action was new (new).
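This pattern can be wrapped in a small helper. A minimal sketch, assuming the classic ga.js snippet has already set up the global _gaq command queue; trackVirtualPage is a made-up name for this example, not part of the GA API.

```javascript
// Assumes the classic ga.js snippet has defined the global _gaq
// command queue; before ga.js loads, _gaq is just an array.
var _gaq = _gaq || [];

// Hypothetical helper: joins the event parts into a virtual page
// path and queues a pageview, so the event appears in the user
// flow report instead of only in the event tables.
function trackVirtualPage(category, action, label) {
  var path = '/' + [category, action, label].join('/');
  _gaq.push(['_trackPageview', path]);
  return path;
}

trackVirtualPage('use', 'click', 'new'); // queues '/use/click/new'
```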
So here comes the A/B-test example.
Let's say you want to test different layouts.
To prepare the test, isolate the styles you want to change into a main.css.
Now create a.css and b.css, and keep serving the default main.css to the default user group.
If the normal path of a use case, i.e. an event, is 'use/[eventname]',
build the paths 'a/[testdate]/[eventname]' and 'b/[testdate]/[eventname]'.
In the results you can then tell the A/B-test groups apart from the default users of main.
Any user should be in exactly one A/B-test sub-group, never more.
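The path scheme above can be sketched like this; assignGroup and buildTestPath are illustrative names, and persisting the group (e.g. in a cookie, so a user never switches groups) is only hinted at in the comments.

```javascript
// Illustrative sketch of the 'group/testdate/eventname' scheme; the
// function names are made up. In production you would persist the
// assigned group, e.g. in a cookie, so a user stays in one group.

// Assign a user to exactly one group; pass Math.random() in real use.
function assignGroup(random) {
  return random < 0.5 ? 'a' : 'b';
}

// Build the virtual page path for a test event.
function buildTestPath(group, testDate, eventName) {
  return '/' + group + '/' + testDate + '/' + eventName;
}

buildTestPath(assignGroup(Math.random()), '2013-07-03', 'click/new');
// e.g. '/a/2013-07-03/click/new' or '/b/2013-07-03/click/new'
```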
Before you ship and start an A/B-test, document what you are going to test. Keep the records!
If you want to test functions instead of layouts, isolate the JS code and do the same as above.
Sometimes you want to test new functions that come with layout changes. Prepare the HTML, JS and CSS, and
hide the new functions from the default user group.
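One way to decide which stylesheet a user gets, as a sketch: the file names a.css, b.css and main.css come from the text, the function name is made up.

```javascript
// Sketch: pick the stylesheet for a user's group. Test groups get
// a.css or b.css, everyone else keeps the default main.css.
function stylesheetFor(group) {
  return (group === 'a' || group === 'b') ? group + '.css' : 'main.css';
}
// In the page you would then append a <link rel="stylesheet">
// pointing at stylesheetFor(group), and for the default group
// keep the new functions hidden (e.g. display: none).
```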
Set up your user groups. Assign users at random; this is best practice. If you have to, do it by hand.
To compare the results you should know the activity rates of your users in each group before the test starts.
They should not differ significantly.
A difference of 1% might be okay, 5 to 10% is already a gamble, anything worse is as bad as it gets; in that case skip the test, because the results would be worthless.
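The pre-test check can be made concrete as a relative difference between the groups' activity rates. The thresholds are the ones from the text; the function names are made up for this sketch.

```javascript
// Relative difference between two activity rates, as a fraction of
// the larger rate, so 0.05 means a 5% difference.
function activityRateDiff(rateA, rateB) {
  return Math.abs(rateA - rateB) / Math.max(rateA, rateB);
}

// Rough interpretation of the thresholds from the text.
function groupsComparable(rateA, rateB) {
  var diff = activityRateDiff(rateA, rateB);
  if (diff <= 0.01) return 'okay';
  if (diff <= 0.10) return 'a gamble';
  return 'not comparable';
}
```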
Now you are prepared to launch the test.
The test results will show up in the analytics tables and, more importantly, will be visualized in the user flow.
If the activity rates differ afterwards, there is your result. That's what you wanted to know.
If you have invited people to the A/B-test, via email for example, only measure those who followed the link.
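Restricting the measurement to users who followed the invitation link could look like this sketch; the session representation and all names here are made up for illustration.

```javascript
// Sketch: each session is represented as an array of visited paths,
// in order. Keep only sessions that entered through the invitation
// landing path; everyone else is excluded from the measurement.
function invitedSessions(sessions, landingPath) {
  return sessions.filter(function (session) {
    return session[0] === landingPath;
  });
}
```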