Announcing new pricing and the ability to buy server capacity

Hi guys,

I’m currently in the legacy personal plan.

Is the upgrade to the new pricing model an “all or nothing” move, or can I upgrade a couple of the apps on my legacy plan to the new plan?
In other words, I would keep the old plan and have a few apps on the new one.

Many thanks,

Alex

Yes, if you migrate, all apps will follow the new system.

Sorry @emmanuel, I’m still a bit unclear on what would happen.

In the Settings tab for a specific application (i.e. not in my account), it looks like I can migrate that application (let’s call it Application X) to the new plan. See the screen capture below:

If I were to select New Plan > Personal for Application X, which one of these two options would materialize?

Option A) only Application X would be affected (this is what the screen capture above seems to indicate) and all the other applications in my account, which are all Legacy > Personal, would remain as such
Option B) all applications would be affected (this is what your answer seems to indicate)

And once an application is on the new plan (irrespective of whether it got there through Option A or Option B), I will be able to upgrade it or downgrade it at will. Is that right?

The screenshot you’re showing is not a migration. It lets you switch between the old personal plan and the hobby plan, so that you can choose which of your apps should be on the legacy personal plan (limited to 2).

If you want to use the new personal plan (for instance with free SSL) you’ll need to migrate by clicking on the migrate button.

Got it now. Sorry, brain was on weekend mode today. Many thanks!

Curious why we can’t scale on the personal plan too?

Amazon has some great auto-scaling options!

What is the email address for support on a paid plan?
Thank you!

It’s support@bubble.is

In light of recent events I’m wondering more and more whether this server capacity model is actually fair. While I have no reason to doubt the team’s sincerity, dedication, and passion for their product, I feel this system has a flaw.

When I’m having issues in my app that may or may not be caused by me, I tend to buy an additional server unit. I’m probably not the only one doing this, so in a sense Bubble gets rewarded whenever there is an issue.

If it is caused solely by me, I have only myself to blame: maybe I need to change some workflows, or my user base has increased, etc. But if it is not clear what the cause is, or when the issue is on Bubble’s side (including AWS) and I am just not aware of this for whatever reason, it is a bit strange that they are actually earning money by having or causing issues…

Just a thought I wanted to share to see what others think…

Hey Vincent, I took a look at your app’s capacity charts. If you’re comfortable with me posting a screenshot to the forum for the benefit of others, let me know. But basically, what I’m seeing is that you’ve been hitting periods where you are over capacity on a pretty regular basis since Oct. 23, increasing in frequency over the last week.

The rule is actually extremely simple – if you are seeing any maxed out time at all, the issue is 99.5% of the time not with Bubble, it’s with your app … either because you are doing things that are performance intensive, or because you’ve had an increase in usage. Looking at your charts, this is definitely going on with your app.

We’ve also had several system-wide issues with Bubble in the last week, so I understand it’s confusing to tell which slowdowns are that and which are capacity. But if you are seeing maxed out time on your capacity chart, you can definitely assume that you can improve the situation either by buying more capacity or by changing the design of your app to make it more efficient.

I understand you may have a limited budget for buying capacity, and if you’re in that situation, you can use the usage charts to identify the parts of your app that are using the most capacity and try to improve them. If you have identified specific workflows / searches that the charts say are capacity-intensive, but you don’t understand why, please ask on the forum or reach out to support and we’ll be happy to help you figure it out.

Thanks for this, and feel free to share. I have a hard time identifying those bottlenecks, and I have already spent the last few months optimizing workflows.

From @kevin2 I learned that ‘system’ does not take up any capacity, so when I see ‘system’ taking up most of the resources, I’m confused about how things can still max out.

Thanks for the permission to share. I want other users to see this in case they are in a similar situation with their apps. These are the 30-day charts:

The thing I want to emphasize is: this is a chart showing serious capacity problems! It might look like usage is not particularly high, but usage is an average. If you are at 100% usage for 20 seconds, then 0% for 40 seconds, you’ll only be at 33% on the usage chart, but your app will be running slowly for 20 seconds. That’s why we have the separate chart showing time at maximum capacity – if you are seeing even small bumps on that chart, it means that your app is being slowed down by capacity limits.
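
To make the averaging concrete, here’s a minimal sketch of the arithmetic (plain Python for illustration, not Bubble’s actual metering code), using the 20-seconds-on / 40-seconds-off example above:

```python
# Hypothetical per-second capacity samples: 20 s pegged at 100%, then 40 s idle.
samples = [100] * 20 + [0] * 40

average_usage = sum(samples) / len(samples)           # what the usage chart reports
seconds_at_max = sum(1 for s in samples if s >= 100)  # what the "time at maximum capacity" chart reports

print(f"average usage: {average_usage:.0f}%")         # -> 33%
print(f"seconds at max capacity: {seconds_at_max}")   # -> 20 (the app is slow during these)
```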

Anyway, sorry you’ve had issues figuring out where your capacity is going. It sounds like you’re at a bit of a dead end? If so, I will take an in-depth look at your app. I can’t commit to the next day or two, but sometime in the next 7 days I should be able to.

Thanks, that will be much appreciated.

Quick question still, @josh. I’m currently running a report which schedules an API workflow on a list. It passes parameters from a repeating group (consolidated data from multiple data types) to the API workflow, which creates a new thing with 18 fields. The total number of things created is not more than 500.

Currently, no one is using the app and I have 5 additional server units, yet the app is completely maxing out. Is this to be expected from an action like that? No matter what kind of report is generated, with 5 additional server units this should never max out, I think.

I mean, what if hundreds of users are in the app and they are all creating things, even lists of things? That is not at all uncommon. Should we be looking at adding server units by the dozen?

Were we just spoiled by the previous plan, which only limited the number of workflow runs? Is having a couple of additional units the bare minimum for a very basic app? Are the default 2 units just for hobby users and testing purposes?

Obviously, paying <$200 is still a bargain for what we are able to do with Bubble, but maybe I just need to adjust my perspective here…

edit:

I now have 7 units and just ran a pretty basic API workflow on a list. It was a bulk edit of one field in the backend editor, covering 424 items in total.

The API workflow makes a change to ‘Thing X’, setting ‘Field A’ to: Search for Thing Y (where Thing Y’s Field B = Thing X’s Field B):first item’s ‘Field A’. So basically I’m filling an empty field by doing an index-and-match on another common field.
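
Roughly, in Python-style pseudocode, here is what that bulk edit amounts to (the names and the helper function are just placeholders for my own data types, not how Bubble runs it internally):

```python
# One scheduled workflow per Thing X row: 424 rows -> 424 searches + 424 database updates.
def run_api_workflow_on_list(thing_x_rows, search_thing_y):
    for x in thing_x_rows:
        # "Search for Thing Y (Field B = Thing X's Field B):first item"
        match = search_thing_y(field_b=x["field_b"])
        if match is not None:
            # "Make changes to Thing X: Field A = ...'s Field A"
            x["field_a"] = match["field_a"]
```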

So all in all a pretty basic operation in my eyes.

Unfortunately, this again resulted in the server maxing out completely. @josh @emmanuel, how is this still possible? How many units does it take for the server not to max out? :frowning:

I have to say I have noticed the same. Whenever I use API workflows for a small task like yours, it hits maximum capacity! I have not yet reached product release, but I can’t help but feel that this scaling metric needs some work!

It surprises me that you are still maxing out after purchasing additional allocation. If the API is going to do this all the time, why bother exposing it at all if you need a dedicated plan to get consistent performance? Is it really using that many AWS resources?

I’d imagine the pain is on the DB more so than the server, but I’ve noticed similar issues. It is a bit concerning, and it’d be good if we got some sense of how to perform or structure these operations without tanking our entire system. As an example, one of my apps involves creating a new Question and assigning an empty copy to every user in a specific organization. Even with only 20 users, it hits the capacity limit for a brief period. It feels like a basic operation, so either I’m doing something mega wrong, or there is room for improvement.
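
For what it’s worth, the shape of that operation in rough pseudocode (the names and the `db` interface are made up for illustration, not my actual data model):

```python
# Creating one Question fans out into one write per user in the organization,
# so even 20 users means 21 inserts inside a single workflow run.
def create_question_for_org(question_text, org_users, db):
    question = db.insert("Question", {"text": question_text})
    for user in org_users:
        db.insert("Response", {        # an empty per-user copy
            "question": question["id"],
            "user": user["id"],
            "answer": None,
        })
```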

That being said, I don’t anticipate leaving Bubble anytime soon, just want to be able to help out however I can. Oh what I would give for the ability to host and manage it myself!

I don’t mind paying for additional server units, but at this point I’m not sure they actually help with this issue. I’d much rather pay a higher monthly fee directly to the Bubble team than spend it on server units. Then again, I could have set this up like a moron, so there is that…
Luckily, I filed a bug report and the team is looking into it now, so I’m curious to learn what is causing what.

I haven’t noticed a significant difference between 3 and 5 additional units.

Also, what do you mean by this:

Oh what I would give for the ability to host and manage it myself!

Going even further than dedicated?

@vincent56, if you gain any insights, it would be good to know what you may have done incorrectly, if anything at all.

But I agree: if it’s always going to be an issue until you pay for a dedicated plan, then just tell us.

Yeah, I’d like the ability to host the infrastructure myself. When dealing with enterprise and government clients, this has been a deal-breaker for using Bubble, which sucks, because they get all hot and bothered after a day-long workshop. I’d love the ability to stand up the system on their own infrastructure; it’d be significantly higher-margin work for me.

Will update this thread again once I look at @vincent56’s app, but quickly on the overall philosophy:

  • Prior to implementing capacity metering, Bubble was routinely going down because one app would do something expensive, like processing hundreds of thousands of rows at once, and max out our entire system. This was happening on a pretty regular basis. So we were both under-charging and over-charging: we were under-charging apps that had a high ratio of burst usage to total workflow runs, and over-charging apps that had a lot of workflow runs but where each run was pretty lightweight and they were spread out. Switching from total workflow runs to capacity metering wasn’t really optional: we had to do something here, or otherwise we’d never get reasonable levels of uptime. We’d rather have a single app go down from too much usage than all Bubble apps go down.

  • We want to make sure that Bubble is affordable for people manipulating reasonable amounts of data. So when we see cases where someone isn’t doing anything that intensive, but it is still consuming a bunch of capacity, we investigate, because often there is something we can fix on our end: an unoptimized query, a Bubble code inefficiency, etc. (Or, sometimes the user has made a mistake and is accidentally processing a lot more data than they think they are). That’s why I want to take a look at vincent56’s app, to see if there’s something going on there.

  • There are only so many apps we can deep-dive on at once, since it can often take hours / days for us to do that kind of investigation. I’m taking on vincent56’s app because he’s been having issues for a while, and he’s already gone back and forth with our support team for a bit and they couldn’t find an obvious thing he was doing on his end that was generating his level of capacity usage. Often, doing a deep dive on one user’s app turns up improvements that improve the ratio of data processed to capacity consumed for all Bubble users.

  • Running an API workflow on a list does eat a fair amount of capacity: each workflow run has some capacity overhead, and creating / modifying items in the database is expensive. Bubble is designed for thousands of users touching a couple of items of data at a time… when you have single users modifying hundreds of items of data at a time, your capacity-to-user ratio is not going to be as good as an app where users only need to modify a handful of items. One easy way of dealing with this without buying a ton of capacity is to spread the API workflows out by increasing the interval between when you run them (see the sketch after this list). That lets you trade off the time it takes to run them all against the max capacity consumed at any given point in time.

  • If you are increasing capacity and still seeing maxed out time, it doesn’t mean the capacity isn’t having an effect. Rather, it means you haven’t found the level yet at which you have enough for whatever operation you’re doing. If you’re running a series of operations that consume 5 units and you go from 2 to 4 units, you’ll still see maxed out time… if you then add one more unit, you’ll stop seeing maxed out time.
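
To illustrate the interval trade-off mentioned above, here’s a back-of-the-envelope sketch (plain Python, made-up numbers, not how our scheduler actually works): assuming each run needs roughly one second of server work, a longer interval means fewer runs competing for capacity at once, at the cost of taking longer to get through the whole list.

```python
# Back-of-the-envelope math for "Schedule API Workflow on a list" intervals.
# Assumes each run needs roughly 1 second of server work; numbers are illustrative only.
ITEMS = 424
WORK_PER_RUN_S = 1.0

def tradeoff(interval_s):
    """Return (approx. runs in flight at once, minutes to get through the list)."""
    if interval_s <= 0:
        in_flight = ITEMS                                   # everything scheduled at once
    else:
        in_flight = max(1, min(ITEMS, round(WORK_PER_RUN_S / interval_s)))
    total_minutes = ITEMS * max(interval_s, WORK_PER_RUN_S) / 60
    return in_flight, total_minutes

for interval in (0, 0.2, 1, 5):
    in_flight, minutes = tradeoff(interval)
    print(f"interval {interval:>3}s -> ~{in_flight} concurrent run(s), ~{minutes:.0f} min total")
```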
