I built Birdy to help you optimize your Twitter profile with continual automated A/B testing!
Why? More Twitter followers. More website clicks.
I've been a huge fan of Twitter since I discovered the indie makers community. I wanted to create a product around this passion of mine, so I started experimenting with the Twitter API, and I eventually landed on profile A/B testing as a cool problem to solve :).
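For the curious, the core mechanism is conceptually just a loop that rotates your profile fields on a timer. Here's a minimal sketch of the idea using tweepy and the v1.1 `account/update_profile` endpoint - the credentials and the two profile versions are placeholders, and this is not Birdy's actual code:

```python
import time
import tweepy

# Placeholder credentials (tweepy 4.x OAuth 1.0a user context).
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
api = tweepy.API(auth)

# Two hypothetical profile versions to alternate between.
VERSIONS = [
    {"name": "Jane | Indie Maker", "description": "Building in public.", "url": "https://example.com"},
    {"name": "Jane Doe", "description": "I help you grow on Twitter.", "url": "https://example.com"},
]

SWITCH_INTERVAL = 5 * 60  # seconds between switches

i = 0
while True:
    v = VERSIONS[i % len(VERSIONS)]
    # POST account/update_profile swaps the live profile fields.
    api.update_profile(name=v["name"], description=v["description"], url=v["url"])
    i += 1
    time.sleep(SWITCH_INTERVAL)
```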
I have a years-old Twitter account with like 300 followers (many mutuals), but very low engagement on anything that isn't a reply to a higher-profile account.
Should I create a fresh account to reset how the algorithm sees me, or should I double down on this one that already has /some/ (not much) visibility?
Definitely double down on the one you already have. Starting from 0 is way more difficult than starting out with 300 followers.
Transform your profile and make it super interesting (work on your banner and bio, then add a good pinned tweet once you get a somewhat viral one). Then engage daily with other accounts - that's where new people will discover you.
> Should I create a fresh account to reset how the algorithm sees me
Don't overthink it, just think about why someone would follow you. If it's just a bunch of retweets about a large range of topics, it's usually not very interesting to people. Unlike TikTok, there's no algorithm that suddenly puts you in some recommendation box where you then get millions of views overnight.
- Make sure you have a profile picture and a good description
- Clean out the people you follow to be relevant to what you care about these days
- Follow interesting people in your niche
- Reply to tweets if you have something interesting to say; engage with content you like.
After a while you'll see the same people over and over, and if you like what they put out, you follow them.
Cool idea, looks very polished. I guess for this to work best, you need to be very consistent with putting out content to get a good baseline?
The Twitter maker community is exciting to be in right now. Lots of fun tools popping up (Saying this as a fellow bird-named tool creator: https://getbirdfeeder.com).
> I guess for this to work best, you need to be very consistent with putting out content to get a good baseline?
Exactly! You need to be active, because the stats are based on profile clicks, which are derived from your tweets.
> The Twitter maker community is exciting to be in right now. Lots of fun tools popping up (Saying this as a fellow bird-named tool creator: https://getbirdfeeder.com).
It's the best community I've ever been a part of :O Birdfeeder looks nice :D
What a great, simple idea. Way to follow through and actually make it happen!
Random ideas:
* Pricing: you could tie pricing to performance (pay $1 per extra follower that your best version is getting - capped at $X0).
* Marketing: use the Twitter API to find accounts that need you the most - probably businesses who tweet a lot with little engagement. You could target newly funded startups, maybe look at who's posting on Product Hunt, etc.
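A rough sketch of how that prospecting could work, assuming tweepy 4.x and v1.1 search access - the query string and the engagement threshold are made up for illustration:

```python
import tweepy

# Placeholder credentials.
auth = tweepy.OAuth1UserHandler("KEY", "SECRET", "TOKEN", "TOKEN_SECRET")
api = tweepy.API(auth)

# Hypothetical query for accounts tweeting about their startup.
tweets = api.search_tweets(q="our startup", count=100, result_type="recent")

leads = {}
for t in tweets:
    u = t.user
    if u.statuses_count < 200:
        continue  # keep only accounts that actually tweet a lot
    engagement = t.favorite_count + t.retweet_count
    # Low engagement relative to audience size = a profile that may need help.
    if u.followers_count > 100 and engagement / u.followers_count < 0.001:
        leads[u.screen_name] = u.followers_count

# Biggest audiences first - an arbitrary prioritization for outreach.
print(sorted(leads.items(), key=lambda kv: -kv[1])[:20])
```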
> Pricing: you could tie pricing to performance (pay $1 per extra follower that your best version is getting - capped at $X0).
That's interesting, I never thought about this pricing model haha. I'll see if it makes sense. One complication: you might get some followers from people who never clicked on your profile (name hover or suggestions), and there is no way of detecting this.
> Marketing: use the Twitter API to find accounts that need you the most - probably businesses who tweet a lot with little engagement. You could target newly funded startups, maybe look at who's posting on Product Hunt, etc.
Good idea. I'm also thinking about reaching out to social media agencies. They might be interested in a tool that helps their clients grow on Twitter :)
Do this with the Apple App Store! “App Store Optimization” is a huge business, and something simple like this (with suggestions on what you should try - perhaps a premium feature?) could do really well there.
I have about 15 users. They use it because they want to optimize their profile and perfect the funnel that converts profile visitors into followers :D Or to test which profile version converts to more clicks on their bio link.
Oh! That's a good idea. I didn't think about that.
I think at some point I'll have a free plan for everyone (with limited features). I just need to finish implementing all the profile components and then enable only some of them for the free plan.
If I’m understanding correctly, this switches your profile on some interval between A and B, whereas a proper A/B test would randomly bucket each user into variant A or B.
Not that it matters - this solution is probably the right way to go without building something into Twitter itself - but the more data/stats-oriented folks may be confused or irked by calling this “A/B testing”.
That is correct - it switches your profile version at a regular interval.
Indeed, that's the only way I could do it. The Twitter API has its limitations :D
I didn't know there was a specific definition of A/B testing. I'll see if I get more complaints about the terms I use ^^.
To me, that's still A/B testing - that is, I'm testing a version A and a version B and then reporting on which one does better. I guess the way I'm doing it is different :D
Definitely agree that this is a good solution given the limitations. I think the only downside to this approach is that there might be some effect based on time of day or the interval affecting your results. That said, I think the chances of that are super low, and A/B testing is only so accurate anyway. Great idea and nice website!
> I think the only downside to this approach is that there might be some effect based on time of day or the interval affecting your results.
That is definitely true. I'm about to start alternating the versions every 5 minutes to mitigate this. The closer I can get the interval to 0, the more accurate the results are. That way, even if you get a follower spike (say you get a viral tweet), the new followers will be properly distributed between the versions.
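To illustrate the idea: if you know the switching schedule, every new follow can be attributed to whichever version was live at that moment, so a spike gets split across versions roughly in proportion to their on-air time. A toy sketch (all timestamps invented; this is just the attribution logic, not Birdy's implementation):

```python
from datetime import datetime, timedelta

START = datetime(2022, 1, 1)
INTERVAL = timedelta(minutes=5)  # the switching interval
VERSIONS = ["A", "B"]

def active_version(ts):
    """Which profile version was live at timestamp ts?"""
    slot = int((ts - START) / INTERVAL)
    return VERSIONS[slot % len(VERSIONS)]

# Invented follow timestamps, including a 'viral spike' around 12:00.
follows = [START + timedelta(minutes=m) for m in (3, 7, 11, 720, 721, 726, 731, 737)]

counts = {"A": 0, "B": 0}
for ts in follows:
    counts[active_version(ts)] += 1
print(counts)  # the spike is spread over both versions instead of landing on one
```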
Yep - we've run many switchback tests so I'm happy to chat more about it. It's a lot more akin to what you're building here, from a stats point of view.
I feel like that's less of a problem, because a user is more likely to convert on the version they like more, which would still provide accurate data.
But yes, there is no perfect solution with the limitations of the API :D
That's cool, man. Apologize for what is yours to apologize for, and nothing else. Nothing else. And do your thing according to your vision of your ideas, all the way, and show the world. Even if the world won't like it.
You gotta be sure of what you know. Being sure and wrong is better than always being unsure.
Why is bucketing users the better approach? To me it seems like bucketing would ignore "all other factors" that also changed, whereas dynamic (or periodic) switching seems like it would normalize those "other factors" across both A and B (ideally).
I suppose that the issue is that a single user could see both versions, which could skew the data.
I'm not sure I totally understand why, because if a user follows you after seeing the other profile version, it might be because they preferred that version.
But conceptually it makes sense to eliminate as many variables as you can to isolate the components of the test.
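Here's a toy simulation of that trade-off - the conversion rates, the time-of-day effect, and version B's "real" lift are all invented, just to show the shape of the problem. With fast switching, both versions sample every hour of the day and B's measured lift stays close to the true +2%; with one long A block and one long B block, the evening effect gets attributed to B:

```python
import random

random.seed(0)

def follows(hour, version):
    """Simulated follow decision: evenings convert better, B has a real +2% lift."""
    base = 0.05 + (0.03 if hour >= 18 else 0.0)
    return random.random() < base + (0.02 if version == "B" else 0.0)

def run(schedule):
    """schedule(hour, i) -> 'A' or 'B'; returns observed conversion rate per version."""
    shown = {"A": 0, "B": 0}
    won = {"A": 0, "B": 0}
    for hour in range(24):
        for i in range(1000):  # 1000 profile visitors per hour
            v = schedule(hour, i)
            shown[v] += 1
            won[v] += follows(hour, v)
    return {v: round(won[v] / shown[v], 4) for v in shown}

# Fast switching: both versions see every hour -> measured lift ~ the true +2%.
print(run(lambda hour, i: "AB"[i % 2]))
# Long blocks: A all morning, B all afternoon/evening -> B also gets the evening boost.
print(run(lambda hour, i: "A" if hour < 12 else "B"))
```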
Thank you! Yes, custom-built with Tailwind. At first I wanted to use a landing page template, but then I decided it would be easier to just build it myself hehe
This is a brilliant idea. It feels like it should work, since this is already done on YouTube: the same mechanism is used to A/B test thumbnails and automatically choose a winner.
There is no “testing” going on here, unfortunately. You’re just taking turns placing users in the treatment or control group, arbitrarily, depending on when they visit the page. How can you measure any treatment effect when every user can end up in either group?
I guess you could try something like switchback testing, but I’m not convinced visits to the average Twitter profile will yield enough samples.
I think it’s a well-executed idea, but I don’t think it’s fair to sell results under the guise of statistical validity when they don’t appear to have that. (although it’s just Twitter profiles, not eg medical treatment, so no real harm done)
> There is no “testing” going on here, unfortunately. You’re just taking turns placing users in the treatment or control group, arbitrarily, depending on when they visit the page. How can you measure any treatment effect when every user can end up in either group?
This is not a perfect solution, but I have found that it is good enough to be able to identify clear winners (if there is actually a winning version). A lot of followers won't visit your profile multiple times anyway. They visit it, and they either follow it or they don't - and they will of course be influenced by the currently displayed version :). They won't just come back to your profile over and over again for no reason, but if they do, they will also convert better on the version they prefer. So a version might nudge the user into following you while another one might not. I would disagree that this is not testing. I think it is for the majority of the profile clicks you receive.
> I guess you could try something like switchback testing, but I’m not convinced visits to the average Twitter profile will yield enough samples.
I don't think that would be possible with the current Twitter API capabilities anyway.
> I think it’s a well-executed idea, but I don’t think it’s fair to sell results under the guise of statistical validity when they don’t appear to have that. (although it’s just Twitter profiles, not eg medical treatment, so no real harm done)
While the results might not be perfectly accurate, I think they are accurate enough to provide value, especially if you let the test run long enough to get a big sample size. I personally use Birdy (obviously :D) and I have noticed much better conversion, which is why I'm confident.
I'm looking forward to seeing new capabilities appear on the Twitter API to always make the process more accurate though.
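To make "big enough sample size" concrete: since each version ends up with its own visit and follow counts, you can run a standard two-proportion z-test on them. A self-contained sketch with made-up numbers (not something Birdy reports today):

```python
from math import sqrt, erf

def two_proportion_z(follows_a, visits_a, follows_b, visits_b):
    """Two-sided p-value for H0: both versions convert visitors at the same rate."""
    p_a, p_b = follows_a / visits_a, follows_b / visits_b
    pooled = (follows_a + follows_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    # Normal-tail p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: version A converts 120/4000 visits, version B 170/4000.
z, p = two_proportion_z(120, 4000, 170, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> the difference is unlikely to be noise
```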
Don't want to rain on your parade, but this is in no way a scientific A/B test. If I understand this correctly, Birdy periodically switches profile information between A and B.
* How do you make sure the same user who was on the A variant yesterday doesn't get the B variant today?
If the same user gets both variants over different periods of time, how can you say the treatment group engaged more with your content? People shuffle in and out of the treatment group periodically. Thoughts?
> How do you make sure the same user who was on the A variant yesterday doesn't get the B variant today?
You can't. You make a fair point - the stats generated by Birdy are not 100% accurate - but they're the best we've got given the limitations of the Twitter API :D
My point is that the data is good enough, and statistically significant enough (especially if you let the test run long enough), to help you find the better profile version.
I agree that the subset of visitors who come back twice or more can skew the data.
Birdy's stats are not to be taken as the absolute truth, but as solid clues about which of the profile versions it alternates between is providing the best results.
I'm looking forward to improving Birdy as Twitter adds more capabilities to their API!