The cloud can be useful for many workloads, but it is not the ideal match for every application, and it can become costly at scale. Cloud costs can at least double the infrastructure bills of companies operating at scale, put pressure on margins and weigh heavily on market cap.

Most people would agree that you won’t get a decent return on investment from the cloud if you simply lift and shift your current architecture. There’s an argument that you can do better by migrating to microservices, but even that is no guarantee: Amazon Prime Video famously abandoned a microservices architecture for one of its services and cut that workload’s cloud infrastructure costs by a staggering 90%.

Clearly, cloud costs have become a significant problem. However, the challenge with the cloud extends beyond cost – it’s also about performance, security and data management.

Businesses must weigh all four of these dimensions as they work on cloud optimization and strike the right balance between what they run in the public cloud and what they handle on premises.


Everybody is talking about ChatGPT, and many companies are launching efforts in the larger realm of artificial intelligence. When companies use large language models, they may end up uploading enormous volumes of training and inference data to the cloud. Enterprise applications such as SAP have similarly data-heavy workload requirements.

If you put that data in the cloud, you will have to pay an egress cost whenever you bring your data down. That’s expensive, but you might think you must pay that price for performance.
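To see how quickly egress adds up, here is a back-of-the-envelope sketch in Python. The per-GB rate is an assumption for illustration, roughly in line with published first-tier list prices at major providers; real pricing is tiered and region-dependent, so substitute your own numbers.

```python
# Rough egress-cost estimator. The rate below is an assumed
# illustrative figure, not any provider's actual price sheet.

EGRESS_RATE_PER_GB = 0.09  # assumed $/GB, first pricing tier

def monthly_egress_cost(gb_downloaded_per_month: float,
                        rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Return the estimated monthly egress bill in dollars."""
    return gb_downloaded_per_month * rate_per_gb

# A hypothetical AI pipeline that pulls a 5 TiB dataset down twice a month:
cost = monthly_egress_cost(2 * 5 * 1024)  # 10 TiB of egress
print(f"${cost:,.2f} per month")  # → $921.60 per month
```

The point is that the charge recurs every time the data comes down, so data-hungry workloads such as model training can rack up egress bills that rival the storage cost itself.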

But data doesn’t need to live in the public cloud. Instead, you can store it “near-cloud” and still enjoy close to sub-millisecond access to the cloud resources you need.

If you adopt the right near-cloud solution, you can also get guaranteed availability, which is dramatically better than what the public cloud offers. No cloud provider will give you a 100% data availability guarantee, and the service credits they do offer are so small as to be almost negligible; they exist as token compensation for missed SLAs, not as real assurance.


Meta was recently hit with a $1.3 billion fine for moving European users’ personal data to the U.S. in violation of the European Union’s General Data Protection Regulation (GDPR). Calling it an “unjustified and unnecessary fine,” Meta blogged that it plans to appeal the ruling, adding that it was “disappointed to have been singled out when using the same legal mechanism as thousands of other companies looking to provide services in Europe.”

This raises the question of whether other big, powerful companies think they also have the right to move people’s data around as they see fit. If that’s the case, whose data is safe?

Some organizations with compliance obligations are beginning to adopt “near-cloud” strategies that deliver nearly the same performance as cloud-native storage at a lower cost, with the added benefit of controlling precisely where those data resources reside.

It’s up to you to decide how much ambiguity you can live with and what data policies to set. There may be certain types of data that should not leave the locked room inside your data center and other data that you are happy to make publicly available. Set policies for which data you want to keep on premises and which data you are comfortable storing in or near the cloud.

Also ensure that you have data discovery and visibility capabilities. This will enable you to understand what data is where so that you can effectively address compliance.
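One way to operationalize such policies is a simple map from data sensitivity class to the storage tiers where that data is allowed to live. The class and tier names below are illustrative assumptions for this sketch, not an industry standard.

```python
# Illustrative data-placement policy: each sensitivity class maps to
# the set of storage tiers permitted for it. All names are assumed.

POLICY = {
    "restricted": {"on_prem"},                        # never leaves the data center
    "regulated":  {"on_prem", "near_cloud"},          # e.g. GDPR personal data
    "internal":   {"on_prem", "near_cloud", "cloud"},
    "public":     {"on_prem", "near_cloud", "cloud"},
}

def placement_allowed(data_class: str, tier: str) -> bool:
    """Check whether data of the given class may be stored on a tier."""
    return tier in POLICY.get(data_class, set())

print(placement_allowed("regulated", "cloud"))       # → False
print(placement_allowed("regulated", "near_cloud"))  # → True
```

A table like this only works if your discovery tooling can actually tell you which class each dataset belongs to, which is why classification and visibility come first.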


Making sure data is secured, accessible and in the optimal location is part of the nuts and bolts of managing data. But such efforts should really be defined as storage management.

Data management, meanwhile, is more about the nature of the data itself. What data do you have? What do the bits and bytes of that data mean? Policy and compliance happen at that level.

Yet data management and storage management often get blurred into one. What some people refer to as data management is really just table-stakes data hygiene for storage management. You need to go a layer up and do true data management.

Both storage management and data management are important, and you must do both. But recognize that what some vendors call data management is just basic hygiene; it never touches the data itself. Make sure you have the capabilities to catalog data, run analytics on it and otherwise address data management at every stage of the data value chain.
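The distinction can be made concrete with a toy record. Storage management concerns the fields that describe where and how the bytes are kept; data management concerns what the bytes mean. Every field name here is hypothetical, chosen only to illustrate the two layers.

```python
# Toy catalog entry separating storage-level facts from data-level
# facts. All field names are hypothetical illustrations.

storage_view = {            # storage management: the bytes
    "location": "on_prem/array-7/vol-12",
    "size_gb": 840,
    "replicas": 2,
    "encrypted": True,
}

data_view = {               # data management: the meaning
    "dataset": "customer_orders_2023",
    "contains_pii": True,   # this is what drives policy and compliance
    "owner": "sales-ops",
    "retention_years": 7,
}

# Policy decisions hang off the data view, not the storage view:
if data_view["contains_pii"]:
    print("Keep on premises or near-cloud; audit all access.")
```

A tool that only populates the first dictionary is doing storage management, however it markets itself; the second dictionary is where data management begins.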


Many organizations adopted a “cloud first” strategy with the good intention of optimizing their use of infrastructure resources. In some cases, careful planning and a full understanding of the available tools delivered excellent results. Many more organizations, however, have rightly begun to scrutinize the cost-benefit of their “cloud-first” or “cloud-only” decisions.

Data storage infrastructure has quietly emerged as one area ripe for aggressive cloud optimization. Egress fees (charges to download the data you uploaded for free) and replication costs (moving copies of data to another location for redundancy and protection) have significant cost implications. For many organizations, egress alone can be the single greatest cloud cost.

Now that the cloud – and those who use it – have matured, everybody has learned a lot more about what works and what doesn’t. So, we can all be more careful about what to put where.

In deciding what to do, remember that there is more than one dimension to cloud optimization. Look at things holistically – and consider cost, performance, security and data management.

We’re Waiting To Help You

Get in touch with us today and let’s start transforming your business from the ground up.