5 Ways IT Leaders Are Rethinking Endpoints as Memory Costs Reshape the Market

AI’s demand for memory is being felt well beyond the data center: thin clients, device repurposing, and smarter lifecycle strategies for endpoint computing are moving from niche to necessary.

Written By
Kevin Greenway
May 8, 2026
4 minute read

The conversation around endpoints is changing, and not in subtle ways.

Rising memory costs, supply constraints, and AI-driven demand are forcing IT leaders to revisit assumptions that have gone largely unchallenged for years. The traditional refresh cycle is no longer just expensive. In many cases, it no longer makes strategic sense.

What is emerging instead is a more flexible, pragmatic approach to endpoint computing: one that prioritizes efficiency, extends the life of existing assets, and reduces dependence on volatile hardware markets.

Here are five shifts defining that transition.

1. Thin clients are moving from niche to mainstream

Thin clients have long been associated with specific use cases such as call centers or tightly controlled environments. That perception is changing quickly.

As more applications move to the cloud and virtual desktops become standard, the endpoint no longer needs to deliver high levels of local compute performance. Thin and zero clients are well-suited to this model because they are designed for secure, consistent access rather than heavy client-side processing.

In today’s market, that design becomes an advantage. Thin clients reduce reliance on components like DRAM, which are subject to price swings and supply constraints. They also simplify management and create a more predictable cost structure over time.

For many organizations, thin clients are no longer an alternative. They are becoming the default choice for a growing percentage of users.

2. Repurposing is replacing a wholesale refresh

The second shift is less about new technology and more about changing mindset. Instead of automatically replacing aging devices, organizations are taking a closer look at what those devices are still capable of doing. In many cases, older laptops, desktops, and even aging thin clients can be repurposed into modern thin client endpoints with the help of lightweight operating systems.

This approach allows IT teams to extend the useful life of hardware by several years without sacrificing access to modern applications. Users continue working in cloud or virtual environments, while the endpoint simply serves as a secure connection point.

The benefit is immediate. Fewer devices need to be replaced, capital expenditures are reduced, and organizations gain flexibility in how and when they invest in new hardware.
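As a back-of-the-envelope illustration, the capital impact of repurposing can be sketched with a simple cost model. All figures below are hypothetical placeholders, not market data, and the 60 percent reusable share is an assumption for the example:

```python
# Hypothetical cost model comparing a wholesale refresh with a
# repurpose-first strategy. All prices are illustrative placeholders.

def refresh_cost(devices: int, price_per_device: float) -> float:
    """Cost of replacing every device outright."""
    return devices * price_per_device

def repurpose_cost(devices: int, reusable_share: float,
                   license_per_device: float, price_per_device: float) -> float:
    """Cost when a share of the fleet is converted to thin clients
    via a lightweight OS license, and only the remainder is replaced."""
    reused = round(devices * reusable_share)
    replaced = devices - reused
    return reused * license_per_device + replaced * price_per_device

fleet = 1000
full = refresh_cost(fleet, price_per_device=900.0)
mixed = repurpose_cost(fleet, reusable_share=0.6,
                       license_per_device=75.0, price_per_device=900.0)
print(f"Full refresh:  ${full:,.0f}")
print(f"Repurpose 60%: ${mixed:,.0f}")
print(f"Savings:       ${full - mixed:,.0f}")
```

Even with placeholder numbers, the shape of the result is the point: converting a majority of the fleet to thin client endpoints shifts spend from capital replacement to much smaller per-device software costs.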

3. The endpoint is becoming an access layer

Underlying both thin client adoption and repurposing is a broader architectural shift.

Work is no longer tied to the endpoint. Applications run in browsers, data lives in the cloud, and desktops can be delivered virtually. The endpoint’s role is shifting from a standalone compute device to an access layer that connects users to centralized resources.

That change has significant implications. Performance is increasingly defined by the cloud environment rather than the hardware on the desk. As a result, the need for high-spec endpoints across the entire organization is diminishing.

IT leaders can begin to align device strategy with actual workload requirements instead of assuming that every user needs a fully provisioned PC.
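One way to operationalize that alignment is a simple tiering rule that maps each user's workload profile to an endpoint class. The tier names, criteria, and the user inventory below are all hypothetical, offered only as a sketch of the idea:

```python
# Hypothetical tiering rule mapping workload profile to endpoint class.
# The tier names and criteria are illustrative, not an industry standard.
from collections import Counter

def endpoint_tier(runs_local_compute: bool, uses_vdi_or_cloud: bool) -> str:
    if runs_local_compute:
        return "full PC"            # CAD, media work, local builds, etc.
    if uses_vdi_or_cloud:
        return "thin client"        # the work already lives in cloud/VDI
    return "review individually"    # unclear profile: audit before deciding

# Applying the rule across a made-up user inventory.
users = [
    {"role": "analyst",  "local_compute": False, "vdi": True},
    {"role": "engineer", "local_compute": True,  "vdi": False},
    {"role": "support",  "local_compute": False, "vdi": True},
]
tiers = Counter(endpoint_tier(u["local_compute"], u["vdi"]) for u in users)
print(dict(tiers))  # e.g. {'thin client': 2, 'full PC': 1}
```

The design choice worth noting is the order of the checks: local compute needs take precedence, so a user who genuinely requires client-side horsepower is never forced onto a thin client by default.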

4. Lifecycle planning is becoming more flexible

For years, refresh cycles were treated as fixed timelines. Devices reached a certain age and were replaced, regardless of how they were being used.

That approach is becoming harder to justify in a market shaped by supply volatility and rising costs. Organizations are moving toward more flexible lifecycle strategies that reflect real-world usage.

Some endpoints may still need to be replaced on a regular schedule, particularly for high-performance roles. Others can be extended, repurposed, or transitioned to thin client models. The key is to move away from a one-size-fits-all approach that leaves the organization exposed to price surges and supply disruptions.

This flexibility allows IT teams to avoid buying hardware at the peak of pricing cycles and to make more deliberate, cost-effective decisions.
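A per-device decision rule of this kind can be sketched in a few lines. The thresholds (the 1.2x price-index trigger, the five-year age cutoff) and the price-index input are hypothetical assumptions, not recommendations:

```python
# Hypothetical per-device lifecycle decision rule. The 1.2x price-index
# trigger and five-year age cutoff are illustrative assumptions.

def lifecycle_action(age_years: float, meets_workload: bool,
                     dram_price_index: float, baseline: float = 1.0) -> str:
    """Decide what to do with one endpoint this cycle.
    dram_price_index > baseline means memory is pricier than normal."""
    if not meets_workload:
        return "replace"            # user is blocked: cost comes second
    if dram_price_index > baseline * 1.2:
        return "extend/repurpose"   # defer purchases during a price spike
    if age_years >= 5:
        return "replace"            # normal pricing and end of useful life
    return "extend/repurpose"

print(lifecycle_action(age_years=6, meets_workload=True, dram_price_index=1.5))
```

The rule encodes the flexibility described above: an aging device that still meets its workload is deliberately held back from replacement while memory prices are spiking, and is refreshed only once pricing normalizes.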

5. Sustainability is becoming a practical driver

Cost is not the only factor influencing these decisions. Sustainability is also playing a larger role.

Extending the life of existing hardware reduces e-waste and lowers the environmental impact associated with manufacturing and disposal. Lifecycle analyses from Interzero and Fraunhofer UMSICHT suggest that reuse can reduce emissions by up to 37 percent.
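To make the cited figure concrete, here is a small upper-bound calculation. The per-device lifecycle footprint used below is a hypothetical placeholder, not a measured value; only the up-to-37-percent reduction comes from the analyses cited above:

```python
# Illustrative application of the cited up-to-37% reuse figure.
# The per-device footprint is a hypothetical placeholder, not a
# measured value from the cited lifecycle analyses.

def reuse_savings(devices: int, footprint_kg_co2e: float,
                  reduction: float = 0.37) -> float:
    """Upper-bound emissions avoided if reuse cuts each device's
    lifecycle footprint by `reduction` (cited as up to 37%)."""
    return devices * footprint_kg_co2e * reduction

saved = reuse_savings(devices=1000, footprint_kg_co2e=300.0)
print(f"Up to {saved / 1000:.1f} tonnes CO2e avoided")
```

Even as a rough upper bound, a fleet-wide figure like this gives sustainability teams a defensible number to attach to a repurposing program.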

What makes this shift notable is that sustainability is not being treated as a separate initiative. It is becoming a natural outcome of more efficient endpoint strategies.

Organizations that repurpose hardware and reduce refresh frequency are not only saving money but also making measurable progress toward environmental goals.

A different kind of endpoint strategy

Taken together, these shifts point to a broader change in how endpoints are evaluated and deployed.

The focus is moving away from keeping pace with hardware cycles and toward delivering consistent, secure access to modern work environments. Thin clients, repurposed devices, and cloud-delivered desktops are all part of that transition.

This does not mean the end of the PC. It means the end of treating every user the same when it comes to endpoint requirements. The organizations that adapt most effectively will be those that recognize this moment for what it is: not just a period of market disruption, but an opportunity to build a more resilient, cost-efficient, and sustainable approach to endpoint computing.

Kevin Greenway

Kevin Greenway joined 10ZiG in 2012 and became CTO in 2015. He leads the company’s overall technology and product strategy, collaborating with global teams to ensure continuous innovation in a fast-paced, disruptive market. Under his leadership, 10ZiG delivers modern, managed, and secure endpoints through a unified hardware and software approach. A computer science graduate with numerous IT certifications, Kevin has more than 25 years of experience in the IT sector, including remote connectivity, terminal emulation, VoIP, unified communications, and VDI remoting protocols. Since joining 10ZiG, he has focused exclusively on VDI and End User Computing (EUC) and oversees strategic technology alliances with leading partners such as Citrix, Microsoft, and Omnissa.
