A better approach to pricing

We've always questioned the fairness of the standard per-user pricing model. Struct brings together communities and teams on a single platform, necessitating a unique pricing strategy that addresses their varied needs.

Let's talk about pricing. The industry-standard model of charging per user or per agent never quite sat well with us. It's time to shake things up. At Struct, we're bringing two different use cases - communities and teams - under one platform. This unique blend demands a pricing model that can accommodate the varied needs of both groups.

Our original thought was to build pricing around active threads. But after running hundreds of orgs on Struct since our Knowledge Base launch in June, we realized that the main cost of running a chat platform like Struct is GPT. Serving chats, threads, files, and emojis isn't all that expensive; GPT is the real cost.

Struct uses GPT for these tasks:

  1. Generating a title and summary for each thread

  2. Answering questions asked to Struct Bot

  3. (In development) Generating a weekly newsletter to send out to members

GPT is the most variable of our costs to run the platform, and at the same time, it's something that can be controlled. For example, if a team doesn't use Struct Bot to ask questions, their GPT usage will be lower than that of a team that's actively asking questions.

$29.95 per month, with 500K tokens included + a $30 per million tokens overage rate

So it only seemed right that we charge by GPT usage, specifically by the tokens used. To keep things simple, we treat prompt and completion tokens the same. Our pricing model would be $29.95 per month, with 500K tokens included in the base price. Beyond that, we'd charge $30 per million tokens used; you'd only pay for the overage you actually use.
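
To make the math concrete, here's a minimal sketch of that calculation in Python. The function name and the lack of rounding are our own illustration, not Struct's actual billing code; it simply applies the numbers above: a $29.95 base, 500K included tokens, and $30 per million tokens beyond that, with prompt and completion tokens counted together.

```python
def monthly_charge(tokens_used: int) -> float:
    """Estimate a month's bill under the proposed model (illustrative sketch)."""
    BASE_PRICE = 29.95           # USD per month
    INCLUDED_TOKENS = 500_000    # tokens covered by the base price
    OVERAGE_PER_MILLION = 30.0   # USD per 1M tokens beyond the included amount

    overage_tokens = max(0, tokens_used - INCLUDED_TOKENS)
    return BASE_PRICE + overage_tokens / 1_000_000 * OVERAGE_PER_MILLION


# An org that used 1.5M tokens in a month:
# 29.95 + (1,000,000 / 1,000,000) * 30 = 59.95
print(monthly_charge(1_500_000))  # 59.95
```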

This pricing model works well for everything from teams of 5 to communities of 100,000. In fact, we've seen that large communities tend to produce fewer messages and require less GPT work than teams. The model generates reasonable, predictable pricing across the entire spectrum.

We haven't enforced any monetization on Struct so far. The focus has been on launching the chat platform. Before we introduce this pricing model, we'd ensure that our users have ways to tweak their GPT usage.

For example, we'd allow our users to set a limit on how much they can be charged in a month, to avoid any surprises. Beyond that limit, Struct would still continue to operate: sending messages, serving threads, pretty much like every other chat platform that exists today.
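
As a rough sketch of how such a cap could interact with the token-based bill, here's an illustrative Python function. The `monthly_cap` setting and the function name are hypothetical, not a shipped feature; the intent is simply that the charge never exceeds the limit an org sets.

```python
BASE_PRICE = 29.95           # USD per month
INCLUDED_TOKENS = 500_000    # tokens covered by the base price
OVERAGE_PER_MILLION = 30.0   # USD per 1M tokens beyond the included amount


def capped_monthly_charge(tokens_used: int, monthly_cap: float | None = None) -> float:
    """Estimate the bill while honoring an optional per-month spending limit.

    Hypothetical sketch: `monthly_cap` is an assumed org-level setting, not a
    shipped Struct feature. Once the cap is hit, GPT-backed features would
    pause while core chat keeps working, so the bill never exceeds the cap.
    """
    overage_tokens = max(0, tokens_used - INCLUDED_TOKENS)
    charge = BASE_PRICE + overage_tokens / 1_000_000 * OVERAGE_PER_MILLION
    return charge if monthly_cap is None else min(charge, monthly_cap)


# A team that caps its bill at $50: a heavy month that would have cost $59.95
# is billed $50, and GPT features would pause until the next billing cycle.
print(capped_monthly_charge(1_500_000, monthly_cap=50.0))  # 50.0
```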
