This month at VMware Explore Europe in Barcelona, we hosted an Executive Briefing session for VCPP partners. It included multiple breakout sessions, and I was invited to present the one titled “Exploiting your VCPP Bundles to the fullest with incremental revenue streams”. In essence, my breakout session covered how to build additional, monetizable services using the VCPP Flex-Core Bundle and Add-Ons.
During the session, one of the attendees asked an interesting question that comes up in many conversations with partners: “How do I arrive at a reasonable pricing model for per-GB-VRAM features and products in VCPP?”. This is what I am going to answer in this blog post.
Understanding VCPP Pricing
When you look into the Product Usage Guide, located in Partner Connect, you will find that the majority of products and solutions are metered and priced based on the amount of VRAM consumed by the VM that leverages a given set of features. For example, NSX-T DC Professional adds x Points to the Flex Core Bundle charge, while the NSX-T DC Advanced Edition adds y Points to it. The more features and products a given VM uses, the more expensive it becomes for the provider – in direct relation to the GB of virtual RAM reserved or allocated to that particular VM.
This model provides a nicely aligned basis for calculating the full cost of any given workload and is fully Pay-as-you-go and scalable for the provider. On the sales and pricing side, however, the per-GB-VRAM metric can cause a challenge. It is fairly uncommon in the cloud market to charge for features, like network capabilities in NSX or monitoring capabilities in vRealize, based on the amount of VRAM a VM has. Customers would be reluctant to pay a different price for their distributed firewall or OS monitoring across two VMs only because those VMs have different amounts of memory. There is simply no relatable technical connection between the feature and the different price points.
Aligning on a per VM charge
The obvious answer to the above question is therefore: charge per feature set that any given VM uses, irrespective of the amount of VRAM the VM has. This usually raises some eyebrows with the audience. Why? Because this proposal disconnects the cost driver (VRAM) from the revenue driver (number of VMs). And that can make calculations complicated and increase risk on the desired margins.
However, with the right data and some simple financial engineering, VMware Cloud Providers can mitigate this risk and secure the margins they want, all while selling features and products on a compelling per-VM basis.
It’s all about the Math
Here is how it’s done: First of all, we need a solid understanding of the average VRAM size and distribution of the VMs that a single customer, or all customers combined, are running on the cloud platform we want to calculate pricing for. Whether the analysis is done for one customer only or across all customers depends on whether the provider has a dedicated pricelist per customer or a single pricing model across all customers. Overall, the larger the set of VMs we look at, the better we can minimize risk.
As an easy example, let’s assume the list of VMs looks as follows:
Since the cost driver for the provider in the VCPP model is GB of VRAM in relation to the points per GB VMware charges per set of features, we need to understand the incremental number of points. You can refer to the Product Usage Guide to calculate the number of points based on the features and products your customers need. Let’s assume the provider wants to price and sell a set of features that adds 5 points per GB of VRAM to the Flex-Core price. This gives us the following:
| VM | vRAM (GB) | Added Points |
|---|---|---|
| VM-1 | 4 | 20 |
| VM-2 | 4 | 20 |
| VM-3 | 2 | 10 |
| VM-4 | 8 | 40 |
| VM-5 | 24 | 120 |
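As a minimal sketch of this step, the added points per VM are simply vRAM multiplied by the per-GB point rate for the feature set (the 5-points-per-GB rate is the example assumption from the text; look up the actual rate in the Product Usage Guide):

```python
# Added VCPP points per VM: vRAM (GB) x points-per-GB for the chosen feature set.
POINTS_PER_GB = 5  # example rate from the text; check the Product Usage Guide for real rates

vms = {"VM-1": 4, "VM-2": 4, "VM-3": 2, "VM-4": 8, "VM-5": 24}  # vRAM in GB

added_points = {name: vram * POINTS_PER_GB for name, vram in vms.items()}
print(added_points)  # {'VM-1': 20, 'VM-2': 20, 'VM-3': 10, 'VM-4': 40, 'VM-5': 120}
```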
And based on the VCPP points price, the provider can now calculate the incremental cost for any given VM size. We assume the list price of one USD per VCPP point here. So far so easy. Now comes the important part. We want to find a price towards the customer that fulfils the following requirements:
- Be applicable across any given GB VRAM size of a VM
- Reduce Risk
- Preserve Margin
- Be competitive
To achieve this balance, we first need to calculate the average cost added for the feature set. In this case, it’s 42 USD:
| VM | vRAM (GB) | Added Points | Added Costs |
|---|---|---|---|
| VM-1 | 4 | 20 | 20 USD |
| VM-2 | 4 | 20 | 20 USD |
| VM-3 | 2 | 10 | 10 USD |
| VM-4 | 8 | 40 | 40 USD |
| VM-5 | 24 | 120 | 120 USD |
| **Average Cost:** | | | **42 USD** |
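This step can be sketched as follows, using the assumed list price of 1 USD per VCPP point from the example:

```python
# Incremental cost per VM at an assumed list price of 1 USD per VCPP point,
# and the average cost across all VMs (the basis for the per-VM price).
USD_PER_POINT = 1.0  # assumption used throughout the example
added_points = {"VM-1": 20, "VM-2": 20, "VM-3": 10, "VM-4": 40, "VM-5": 120}

added_costs = {name: pts * USD_PER_POINT for name, pts in added_points.items()}
average_cost = sum(added_costs.values()) / len(added_costs)
print(average_cost)  # 42.0
```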
With this information, the provider can add a markup as a percentage of the total average cost for the add-on feature set to determine an incremental price per VM. That price is independent of the VM's VRAM size, which fulfils the first requirement we had.
The absolute margin is, however, different for VMs of different sizes, and may even be negative on a per-VM basis. In this example, VM-5 would generate a negative margin based on these assumptions. To reduce risk and preserve the desired positive margin, the provider can now run the calculation with different markups for the feature set and determine the optimal absolute margin while remaining competitive. In this example, we used a 25 percent markup:
| VM | vRAM (GB) | Added Points | Added Costs | Price | Margin |
|---|---|---|---|---|---|
| VM-1 | 4 | 20 | 20 USD | 52.50 USD | 32.50 USD |
| VM-2 | 4 | 20 | 20 USD | 52.50 USD | 32.50 USD |
| VM-3 | 2 | 10 | 10 USD | 52.50 USD | 42.50 USD |
| VM-4 | 8 | 40 | 40 USD | 52.50 USD | 12.50 USD |
| VM-5 | 24 | 120 | 120 USD | 52.50 USD | -67.50 USD |
| **Average Cost:** | | | **42 USD** | **Total Margin:** | **52.50 USD** |
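The full markup-and-margin calculation from the table can be sketched like this (the 25 percent markup is the example value from the text):

```python
# Per-VM price = average cost plus markup; then margin per VM and in total.
MARKUP = 0.25  # 25 percent markup, as in the example
added_costs = {"VM-1": 20.0, "VM-2": 20.0, "VM-3": 10.0, "VM-4": 40.0, "VM-5": 120.0}

average_cost = sum(added_costs.values()) / len(added_costs)  # 42.0
price_per_vm = average_cost * (1 + MARKUP)                   # 52.5

margins = {name: price_per_vm - cost for name, cost in added_costs.items()}
total_margin = sum(margins.values())                         # 52.5
total_revenue = price_per_vm * len(added_costs)              # 262.5
print(price_per_vm, total_margin, total_revenue)
```

Note how VM-5's margin comes out negative (-67.50 USD) while the total margin stays positive, exactly as in the table above.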
It’s important to observe that, while some absolute margins for larger VMs, like VM-5 in this example, are negative, the total overall margin remains positive because the smaller instances more than compensate. In this case, a set of features or capabilities delivered by products that are charged at an additional 5 VCPP Points would be sold at 52.50 USD per VM. This equals a total of 262.50 USD of incremental revenue with a total of 52.50 USD, or 20 percent, overall margin.
If more granularity and additional risk mitigation are required, partners can segment the projected sizes of VMs and apply different prices for per-VM features in each segment, based on the weighted average within that segment. Such segmentation would typically be based exclusively on VRAM size, which reintroduces the very link between technically unrelated features and different price points that we set out to avoid. It must therefore be used with caution, i.e. with only a few segments.
A similar approach is to model VM classes and price these classes and their add-ons according to their use case. This is frequently seen in hyperscale pricing models and can be done in VCD using Compute Policies, too. With this, VMware Cloud Providers can build, for example, memory-intensive VM classes and t-shirt sizes that come with a different per-VM add-on price compared to general-purpose VM classes.
As a final option, providers can include the additional features in the base VM price for a class, for example a high-security VM class that includes additional networking, security and monitoring capabilities in the per-VM base price without Add-Ons.
Additional Considerations and Planning
With the above example, we were able to show how to calculate a per-VM price from a per-GB-VRAM cost driver. The logic does not change whether the calculation is done for 5, 50 or 5,000 VMs. Yet there are a couple of additional considerations for real-world scenarios.
First of all, the incremental charge within VCPP is capped at a certain amount of chargeable GB of VRAM. Every VM that is larger than that cap must be treated as if it had the capped maximum of GB VRAM in the calculation. Otherwise, the provider risks overpricing and becoming less competitive.
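The cap can be folded into the points calculation with a single clamp. Note that the 24 GB cap below is purely an illustrative placeholder; the actual chargeable maximum is defined in the Product Usage Guide:

```python
# Apply the VCPP chargeable-VRAM cap before computing added points.
# The 24 GB cap is an illustrative assumption -- the real cap is in the Product Usage Guide.
VRAM_CAP_GB = 24
POINTS_PER_GB = 5  # example rate from the text

def added_points(vram_gb: int) -> int:
    """Points for the feature set, with vRAM clamped to the chargeable maximum."""
    return min(vram_gb, VRAM_CAP_GB) * POINTS_PER_GB

print(added_points(64))  # 120 -- a 64 GB VM is charged as if it had 24 GB
print(added_points(8))   # 40  -- VMs below the cap are unaffected
```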
The bigger issue that comes up in conversations about this approach, is the static nature of the model. We looked at a snapshot of VMs and their VRAM sizes at a given point in time. This approach contradicts the scalable and flexible nature of using Cloud resources, where VMs get spun up, scaled or deleted as demands change. To counter this effect and its potentially negative impact on margin, partners should calculate based on different scenarios and assumptions about the development of the environment. As the environment grows or the calculation is done across a larger set of VMs, outliers in either direction will have less impact on the margin.
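Such scenario calculations can be sketched by re-running the margin math against different assumed VM mixes while holding the per-VM price fixed. The scenarios below are invented for illustration; the price and cost rates come from the earlier example:

```python
# Recompute total margin under different fleet scenarios at a fixed per-VM price,
# to see how shifts in the VM mix affect margin. Scenario mixes are illustrative.
PRICE_PER_VM = 52.5  # from the earlier example (42 USD average cost + 25% markup)
scenarios = {
    "today":      [4, 4, 2, 8, 24],          # vRAM in GB, the original fleet
    "more_small": [4, 4, 2, 8, 24, 2, 4],    # growth dominated by small VMs
    "more_large": [4, 4, 2, 8, 24, 24, 16],  # growth dominated by large VMs
}

results = {}
for name, sizes in scenarios.items():
    costs = [gb * 5 * 1.0 for gb in sizes]   # 5 points/GB, 1 USD/point
    results[name] = PRICE_PER_VM * len(sizes) - sum(costs)

print(results)  # {'today': 52.5, 'more_small': 127.5, 'more_large': -42.5}
```

The "more_large" scenario turning negative is exactly the kind of outcome these what-if calculations are meant to surface before it shows up in the books.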
Besides this basic financial engineering, partners can implement contractual safety nets that allow them to adjust pricing in accordance with the average size of workloads or other changes to the environment, which is common practice in cloud environments.
In some cases, it can make sense to include certain features in an increased base VM price without breaking them out into separate, per-VM SKUs. This is for example the case when a feature is usually used by every VM in the environment, like IDS in NSX-T DC Advanced. The same model may be applied for features and functionalities that are not detected on a per-VM basis by Usage Meter. Examples include IPv6 dynamic routing, EVPN and VRF, which are detected per Tier-0 Router, or L2VPN, which is detected on a per-Segment basis. In these cases, partners could still implement a more granular charging model, but need to pay attention to the inherent risk of disconnecting the cost driver from the revenue driver. For this reason, and to create a predictable pricing model for customers, an increased base charge for all VMs may be the better choice compared to granular per-VM pricing.
Partners should consult the Usage Meter Detection guide, available in Partner Connect, to understand the exact metering mechanism and derive the appropriate charging model.
If you’d like to get started with calculating the business opportunity behind these additional value-added services, VMware provides Cloud Provider opportunity calculators for Flex-Core and value-added services.
And as always, please do not hesitate to reach out to your account teams and ask for support with building your business case and monetization strategy.