Our Technology Evangelist, Neil King, spends most of his time helping our customers align with industry best practice. Neil took some time away from his desk to discuss what the future data economy looks like, and how Bechtle ensure our customers are taking advantage of their most valuable asset: their data!

So, my first question is, what do you mean by “the future looks different in the data economy”? 

My thought is that the world is changing, right? I mean, the technology world has gone through a period of change: our customers now have IoT (Internet of Things), distributed systems and edge systems, all of which integrate seamlessly into IT, whether that be a data lake on-premise or in the cloud.

If you think about it, smart IoT is everywhere: you have sensors in your fridge at home, in your car, sensors are everywhere. What are they doing? They’re creating data, and a lot of it! For example, a popular electric vehicle manufacturer may upload close to 1 GB of data every day, per car; and that data is landing somewhere, right?

There is a business need to find value in that data to improve services and deliver next level customer engagement.

Also, not only do we have massive numbers of sensors in our everyday lives, we’ve also got distributed edge IT out in locations such as oil rigs, mobile phone towers, manufacturing plants and retail stores, to name just a few.

The retail industry is full of edge devices because it has Remote Office/Branch Office (ROBO) locations. Consider a supermarket chain: you might have stores spread throughout the country, each one with an edge device connected to the corporate network. Those edge devices are fed data by a multitude of local devices and local sensors.

When a customer goes into a supermarket, say, they are doing things like self-scanning, and that is collecting a huge number of data points. It’s not just for ease of shopping; the retailer is recording everything you’re doing. That data is fed into the edge device in store, then uploaded to the company’s data lake. From that, the retailer can work out shopping habits and build up a profile of not only what you shop for but how you shop: how long you spend in store, the route you take, where you pause, what items you scan and put back, how much you spend and when.

The retailer will then monetise that by sending you vouchers and offers that they know will appeal to someone with your spending habits. Generally, a member of the public will then return to that retailer with their money-saving voucher instead of choosing to go to a competitor’s store.

That is just one relatable example of how everything’s changing in the data economy, because data is the new gold!

 

How has the role of data evolved in shaping business strategies, and what’s driving this transformation?

Because companies don’t need to guess anymore!

Data can easily identify trends, daily, weekly, yearly, whatever the company needs to extrapolate from the data it is harvesting.

That’s how the strategy of our customers’ businesses is being transformed by this revolution.

 

So, how are advancements in server compute and storage enabling organisations to handle the explosion of data?

So, it’s basically the fact that hardware has got bigger, better and faster, and we get a lot more performance out of the hardware available to us today.

We now have GPUs (Graphics Processing Units) where we used to have CPUs (Central Processing Units). Just digging into that for a second. Individually, GPU cores are not as powerful as CPU cores, but a GPU has thousands of them and can run a huge number of operations in parallel. For the right workloads, that makes a GPU significantly more capable than an equivalent CPU. It’s a volume thing. If you can multitask better, you can get more done. This is why GPUs have been at the forefront of Generative AI (Artificial Intelligence) technology.
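To make that parallelism point concrete, here is a minimal, hypothetical sketch (not something discussed in the interview) that times the same large matrix multiplication on a CPU and then on a GPU. It assumes PyTorch is installed and a CUDA-capable GPU is available; the function name and matrix size are invented for illustration.

```python
# Illustrative sketch: the same workload on CPU vs GPU (assumes PyTorch + CUDA).
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure setup has finished before timing
    start = time.perf_counter()
    _ = a @ b                      # one large, highly parallel operation
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU to finish the work
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```

On most hardware the GPU run completes considerably faster, simply because the work is spread across thousands of cores at once rather than a handful of CPU cores.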

From a server perspective, it’s the amount of processing power that we have now as a result of this technology shift. Next generation servers are now designed for AI and Machine Learning (ML), and because of the amount of data out there, these servers need to be able to process and manage a huge amount of data in very, very short periods of time.

To give you a real-world example, I was in an AI meeting at the beginning of the week. This was a Bechtle-led call, and we were going through how we were working with one of our partners on a genome sequencing project. Working with our customer (a large healthcare provider), we have been able to cut a genome sequencing process down from 20 days to 2 hours.

Using, obviously, enormous amounts of customer data and our partner’s server and storage stack, Bechtle were able to deliver huge time savings to our customer by enabling them to manipulate the data they have collected very quickly.

From a storage perspective, customers traditionally tend to rely on their existing, ageing, slow storage footprint. As a recent example, one of our customers asked for high-performance servers as part of their infrastructure estate refresh. However, they still wanted to utilise their existing storage array, which is seven years old.

What customers sometimes don't realise is that the performance of their infrastructure is only as good as its weakest link. This applies to storage as well as compute; if either is slow, it won't be able to meet demand.

 

With cloud, on-premise, and hybrid approaches evolving, how should IT leaders be thinking about infrastructure strategy?

Ok, let’s stay with storage for a second. So, with data lakes, it’s exactly what it says: a massive lake of data, where customers can apply machine learning (ML) and large language models (LLMs) to extract valuable insights from their data.

Now, a data lake relies on underlying storage hardware platforms for its foundation. Modern technological advancements and efficiency improvements, such as compression, compaction, and deduplication, are making data lakes more efficient. Because of these efficiencies, customers can reduce the size of their data lake and factor this into their infrastructure strategy.
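As a rough illustration of one of those efficiency techniques, here is a minimal, hypothetical sketch of block-level deduplication (the function, block size and sample data are invented for the example): identical blocks are stored once and referenced by their hash, which is why repetitive data shrinks so dramatically.

```python
# Illustrative sketch of block-level deduplication, not any vendor's implementation.
import hashlib

BLOCK_SIZE = 4096  # bytes per block

def deduplicate(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Split data into fixed-size blocks and store each unique block only once."""
    store: dict[str, bytes] = {}   # hash -> unique block contents
    layout: list[str] = []         # ordered hashes needed to rebuild the original
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        layout.append(digest)
    return store, layout

# Highly repetitive data (think similar sensor readings) deduplicates very well.
payload = b"sensor-block-42!" * 50_000           # 800,000 bytes of repeating data
store, layout = deduplicate(payload)
print(f"logical blocks: {len(layout)}, unique blocks stored: {len(store)}")
```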

Cloud connected is also very important. Sometimes you might need to burst out into the cloud. Let’s say a retail customer has got an on-premise solution but then ‘Black Friday’ is approaching. They might need to temporarily expand their on-premise infrastructure into the cloud to enable them to burst (and keep up with demand) for that period. Solutions like this are easy to turn on and off and manage from a single management framework.
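To illustrate the bursting idea, here is a minimal, hypothetical sketch (the thresholds, names and node counts are invented, and a real implementation would call a specific cloud provider's API): temporary cloud capacity is added when sustained utilisation crosses a threshold and released again once demand drops.

```python
# Illustrative sketch of a threshold-based cloud-burst decision, not a vendor API.
from dataclasses import dataclass

@dataclass
class Cluster:
    on_prem_nodes: int
    cloud_nodes: int = 0

BURST_THRESHOLD = 0.85    # burst into the cloud above 85% utilisation
RELEASE_THRESHOLD = 0.50  # hand cloud capacity back once load drops again

def rebalance(cluster: Cluster, utilisation: float) -> Cluster:
    """Add or remove temporary cloud capacity based on current utilisation."""
    if utilisation > BURST_THRESHOLD:
        cluster.cloud_nodes += 1            # e.g. the Black Friday peak
    elif utilisation < RELEASE_THRESHOLD and cluster.cloud_nodes > 0:
        cluster.cloud_nodes -= 1            # demand has passed, stop paying for it
    return cluster

cluster = Cluster(on_prem_nodes=8)
for load in (0.60, 0.90, 0.95, 0.70, 0.40):     # utilisation samples over time
    cluster = rebalance(cluster, load)
    print(f"load={load:.2f} -> {cluster}")
```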

We now live in an ‘adaptive cloud’ world. This is a slightly different approach to cloud computing. It’s the ability to manage multiple clouds, private clouds, edge devices and IoT with familiar cloud tools.

In the solutions we design for our customers, a single pane of glass manages all those pieces of the customer’s infrastructure in one place.

This solves the problem of wanting to manage everything from the cloud when you have geographically dispersed technology stacks. In other words, you're making the cloud ‘adapt’ to your needs and work for you (unified operations, centralised management, true scalability and flexibility), across all public cloud hyperscalers.

One recent example is a customer who was a heavy user of Azure and MS365. However, that setup wasn’t suitable for the data explosion they were experiencing. By deploying an adaptive cloud model, they were able to use Google Cloud to host and manage a data lake, continue to use Azure for their office needs, and still have centralised management of both clouds.

It’s simplicity and ease of use, but the key is to really prepare and plan.

IT leaders should have a very good understanding of the business strategy and the objectives of the business over the next two to three years. They should know what the objectives are and determine what they need to do to realise them.

It’s no longer a case of “we have this budget what can we buy?”, it must be the other way around.

This allows our customers to “buy small” at the outset and then grow. Scale up and scale out - it’s a new way of thinking about IT. Gone are the days of siloed infrastructure.

 

Does it make it more cost effective over time?

Yes, as an IT manager you don't need to rely on the budget you've been given. It’s not “I’ve got this much to spend, what can I buy?” anymore; you must turn that thinking around, as there’s now a more cost-effective way of spending that budget.

Some vendors now have a consumption model, which can be even more cost-effective, allowing customers to scale out and giving them another way of thinking about how they execute on an infrastructure strategy.

 

What role does AI and automation play in optimising data infrastructure and compute performance?

AI plays a large role because you can have an AI platform that sits there monitoring your infrastructure. It can handle resource allocation, deliver more resources to Virtual Machines (VMs), or spin up additional Virtual Machines to cope with demand. AI can also help with fault resolution, such as predicting failing discs. For example, it can automatically notify a hardware vendor and ship a new part for your storage array before you realise anything is actually broken.

Just drilling down into optimisation for a second, let's say you have a VM with a very noisy neighbour, for example a Microsoft SQL Server, which can have a habit of grabbing a lot of memory at times. AI can detect and predict this sort of behaviour over time and automatically move that VM to a different host to free up CPU and memory, reducing the impact on any other services running on that physical hardware, all without intervention from the IT team.
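As a rough, hypothetical sketch of that logic (the thresholds, sample values, VM and host names are all invented, and a real system would call the hypervisor's migration API), the decision boils down to spotting a sustained upward trend and picking a less loaded host:

```python
# Illustrative sketch of noisy-neighbour detection and host selection.
from statistics import mean

def is_noisy(memory_samples_gb: list[float], limit_gb: float = 48.0) -> bool:
    """Flag a VM whose recent average memory use stays above the limit."""
    recent = memory_samples_gb[-6:]          # last six samples, e.g. 6 x 5 minutes
    return mean(recent) > limit_gb

def pick_target_host(free_memory_gb_by_host: dict[str, float]) -> str:
    """Choose the host with the most free memory as the migration target."""
    return max(free_memory_gb_by_host, key=free_memory_gb_by_host.get)

sql_vm_memory_gb = [32, 40, 48, 56, 60, 64]      # trending steadily upwards
free_memory_gb = {"host-a": 12.0, "host-b": 96.0, "host-c": 40.0}

if is_noisy(sql_vm_memory_gb):
    print(f"migrate noisy VM to {pick_target_host(free_memory_gb)}")
```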

 

How should IT teams be preparing for AI-driven workloads and the compute/storage demands they bring?

Let’s face it, not every IT leader has a solid business plan; IT can be very unpredictable at times. In these instances, we would ask them to think about the scalability of their infrastructure. The IT team's role is constantly changing. It’s a company's business units themselves that have the requirements. The business cares about its data, how fast the data can be manipulated and the subsequent value it brings to the business. However, the business overlooks the importance of the infrastructure that delivers these capabilities – the business just needs to know the infrastructure is performant enough to bring value to the data.

Take, for example, a DevOps engineer supporting a business. They want to know that they can spin up a VM or a container in a few seconds (Kubernetes, etc.); they don't care about the underlying infrastructure or how it does it. In this example, they are driving the IT need.

To respond to this paradigm shift, IT teams need to be preparing their infrastructure to be ready for AI-driven workloads, but they don't necessarily need to know what will be running or hosted on that infrastructure. The infrastructure just needs to be high performance, cost efficient and scalable.

If we look at how things used to work: IT teams had a set of requirements from the business, they would then purchase to meet that requirement for the next three to five years, and rack, stack and configure that infrastructure. This could take an exceptionally long time to provision and be very costly. Now, it could take minutes…

IT teams can now provide their internal customers with a service catalogue from which a DevOps engineer or data scientist can say 'I want this t-shirt size'. Then it’s just a simple matter of selecting it, and the backend systems provision that VM. More importantly, with adaptive cloud you can do this both on-premise and in the cloud; it makes no difference.
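A minimal, hypothetical sketch of that catalogue idea (the sizes, specifications and function names are invented for illustration) might look like this, with the same request routed to either an on-premise or a cloud backend:

```python
# Illustrative sketch of a t-shirt-size service catalogue, not a real product.
from dataclasses import dataclass

@dataclass(frozen=True)
class VmSpec:
    vcpus: int
    memory_gb: int
    disk_gb: int

CATALOGUE = {
    "S":  VmSpec(vcpus=2,  memory_gb=8,  disk_gb=100),
    "M":  VmSpec(vcpus=4,  memory_gb=16, disk_gb=250),
    "L":  VmSpec(vcpus=8,  memory_gb=32, disk_gb=500),
    "XL": VmSpec(vcpus=16, memory_gb=64, disk_gb=1000),
}

def request_vm(size: str, target: str = "on-prem") -> str:
    """Turn a t-shirt size into a provisioning request for the chosen backend."""
    spec = CATALOGUE[size]
    return (f"provision on {target}: {spec.vcpus} vCPU, "
            f"{spec.memory_gb} GB RAM, {spec.disk_gb} GB disk")

print(request_vm("M"))                     # e.g. a data scientist's sandbox
print(request_vm("XL", target="cloud"))    # same request, cloud backend
```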

A customer's business unit doesn’t want to wait anymore. They want their resource provisioned within minutes and the ability to get rid of it and recoup those resources instantly when it’s no longer needed.

 

Does that make it much easier to manage an IT estate now?

I wouldn't say it's simpler or easier to manage for IT managers, but it’s a different approach. It’s all about user experience and meeting expectations. It’s highly complex (AI workflows, automation, spinning up new servers), and maintaining that is the new role of the IT team. It’s a different way of doing things; it’s still very complex, but to the end user the experience is much faster and simpler!

 

With the increasing volume of data, what are the biggest security and compliance risks organisations need to address?

There is a goldmine of data in customers' environments, going back decades, and you must secure that data. As a result, there are a number of customers that simply can't move their data to a public cloud for security reasons. Tools like anomaly detection engines and data integrity systems are helping, but the landscape is changing at a 'non-human' rate and Multi-Factor Authentication (MFA) won't stop everything!

Role-based access controls, MFA and tight policies need to be in place and work together. If anything, you must get the right policies in place before you even raise the Purchase Order on your new infrastructure.

Testing is also a crucial step. You must test the security principles you’ve applied to your infrastructure stack, and we would recommend getting experts in to do that for you, before you would even consider going live.

 

How can IT leaders ensure they balance performance and cost in a rapidly changing data economy?

This plays toward our 'start small' strategy. You need to do strategic planning and constantly assess your needs. This will help you get the right balance between the size of infrastructure you need for today’s workloads and the associated cost.

As I said, it’s no longer “we've got a budget, what can we buy”. If you've got your five-year plan and you've got your building blocks, that is how you get that balance. Then, over time, when you need another building block, you can purchase one and just drop it in.

That’s the beauty of scale.

 

What are the biggest mistakes companies make when planning their server and storage infrastructure for the future?

Having a budget and just spending it all at once with little regard for the future.

Also, a lack of planning. If you don't have a plan but do have a budget, and you just use that budget and deploy, you’ve created yourself a silo and you're back to square one.

 

If you had to give one key piece of advice to IT leaders looking to future-proof their compute and storage strategy, what would it be?

Look at the big picture and don't have tunnel vision about a certain request from the business; then you can loop back to what I discussed about strategic planning and scaling.

You need to be thinking strategically rather than tactically.

Also, get Bechtle involved right at the start before you do anything. We’ve got the experience and depth to help you on your data journey.

Conclusion:

In today’s rapidly evolving data economy, businesses are facing both unprecedented opportunities and challenges. As Neil highlights, data has become the most valuable asset companies possess, fuelling innovation, enhancing customer engagement, and driving strategic decision-making. However, to truly unlock the power of data, organisations must rethink their IT infrastructure, embracing adaptive cloud solutions, scalable storage, and AI-driven automation to handle the massive influx of information.

Gone are the days of rigid, siloed infrastructure built solely around available budgets. The future lies in agile, strategic planning that focuses on flexibility, performance, and efficiency. By starting small, scaling as needed, and leveraging cutting-edge technologies, businesses can reduce costs, boost performance, and future-proof their data infrastructure.

Ultimately, success in the data economy isn’t just about having the right hardware—it’s about having the right strategy. And, as Neil emphasises, the key to getting it right is preparation, planning, and working with trusted experts who can help navigate this complex but exciting landscape.

At Bechtle, we’re here to guide you on your data journey, ensuring your infrastructure evolves to meet the demands of today—and tomorrow.

We provide managed services and thought leadership, building technology roadmaps with our customers. We promote our experience and drive customer value through the use of blogs, seminars and our hugely popular annual tech summit.

Bechtle are a solutions-first company, so come and talk to us about your server and storage requirements.

For more information, please contact:

Niamh Burgess-Smith

Head of Infrastructure, Server and Storage

Tel: +44 1249 467 102

Email: niamh.burgess-smith@bechtle.com