Such trends as cloud computing, service-oriented architecture (SOA), social media, Software as a Service (SaaS), and virtualization are combining and overlapping to upset the client landscape. If more of what users are doing with their clients involves services, then shouldn’t the client be more services-ready? Should we expect one client to do it all very well, or do we need to think more about specialized clients that might be configured on the fly?
Today’s clients are one-size-fits-all, more tied to the past than the future. Most clients consist of a handful of entrenched PC platforms, a handful of established Web browsers, and a handful of PC-like smartphones. But what has become popular on the server, virtualization, has yet to be taken to its full potential on these edge devices. New types of dynamic and task-specific clients might emerge. We’ll take a look at what they might look like.
Also, just as Microsoft’s Windows 7 is quickly entering the global PC market, cloud providers are in an increasingly strong position to potentially favor certain client types or data and configuration synchronization approaches. Will the client lead the cloud, or vice versa? We’ll talk about that too.
Either way, the new emphasis seems to be on rich-media, Web-centric activities, where standards and technologies are vying anew for some sort of de facto dominance across both rich applications and media-presentation capabilities.
We look at the future of the client with a panel of analysts and guests: Chad Jones, vice president for product management at Neocleus; Michael Rowley, CTO of Active Endpoints; Jim Kobielus, senior analyst at Forrester Research; Michael Dortch, director of research at Focus; JP Morgenthal, chief architect, Merlin International; and Dave Linthicum, CTO, Bick Group. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.
Listen to the podcast (59:23 minutes).
Here are some excerpts:
Chad Jones: In the client market, it’s time for disruption. Looking at the general PC architectures, we have seen that since pretty much the inception of the computer, you really still have one operating system (OS) that’s bound to one machine, and that machine, according to a number of analysts, is less than 10 percent utilized.
Normally, that’s because you can’t share that resource and really take advantage of everything that modern hardware can offer you. Dual cores and all the gigabytes of RAM that are available on the client are all great things, but if you can’t have an architecture that can take advantage of that in a big way, then you get more of the same.
On the client side, virtualization is moving into all forms of computing. We’ve seen that with applications, storage, and networks, and certainly the revolution that happened with VMware and the hypervisors on the server side. But the benefit of server virtualization was not only the ability to run multiple OSes side by side and consolidate servers, which is great but not as relevant to the client side. It’s really the ability to manage the machine at the machine level and to take OSes and move them as individual blocks of functionality, as workloads.
The same thing can become possible for the client when you start virtualizing that endpoint, stop treating management of the OS as management of the PC, and instead manage that PC at the root level.
Imagine that you have your own personal Windows OS, that maybe you have signed up for Microsoft’s new Intune service to manage that from the cloud standpoint. Then, you have another Google OS that comes down with applications that are specific from that Google service, and that desktop is running in parallel with Windows, because it’s fully controlled from a cloud provider like Google. Something like Chrome OS is truly a cloud-based OS, where everything is supposed to be stored up in the cloud.
Those kinds of services, in turn, can converge into the PC, and virtualization can take that to the next level on the endpoint, so that those two things don’t overlap with each other, and a level of service, which is important for the cloud, certainly for service level agreements (SLAs), can truly be attained. There will be a lot of flexibility there.
Virtualization is a key enabler into that, and is going to open up PC architectures to a whole brave new world of management and security. And, at a platform level, there will be things that we’re not even seeing yet, things that developers can think of, because they have options to now run applications and agents and not be bound to just Windows itself. I think it’s going to be very interesting.
Dave Linthicum: Cloud providers will eventually get into desktop virtualization. It just seems to be the logical conclusion of where we’re heading right now.
In other words, we’re providing all these very heavy-duty IT services, such as database, OSes, and application servers on demand. It just makes sense that eventually we’re going to provide complete desktop virtualization offerings that pop out of the cloud.
The beauty of that is that a small business, instead of having to maintain an IT staff, will just have to maintain a few clients. They log into a cloud account and the virtualized desktops come down.
It provides disaster recovery based on the architecture. It provides great scalability, because basically you’re paying for each desktop instance and you’re not paying for more or less than you need. So, you’re not buying a data center or an inventory of computers and having to administer the users.
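Linthicum’s pay-per-instance economics can be put in back-of-the-envelope terms. The sketch below uses purely hypothetical prices; the point is only the shape of the comparison, paying for the desktop instances actually in use versus buying and administering every seat.

```python
# Back-of-the-envelope comparison of owned PCs vs. per-instance hosted
# desktops. All prices are hypothetical placeholders, not real quotes.

def on_premise_cost(seats, pc_price=800, years=4, admin_per_seat_year=300):
    """Amortized yearly cost of buying and administering a fleet of PCs."""
    return seats * (pc_price / years + admin_per_seat_year)

def hosted_cost(active_seats, price_per_desktop_month=35):
    """Yearly cost when paying only for desktop instances in use."""
    return active_seats * price_per_desktop_month * 12

# A 100-seat shop that only ever has 70 desktops active at once
# pays for all 100 seats on-premise, but only 70 instances hosted.
owned = on_premise_cost(100)
hosted = hosted_cost(70)
```

The numbers are invented, but the structural point matches the excerpt: the hosted model scales cost with instances used, with no data center or inventory to administer.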
That said, it has a lot more cooking to occur, before we actually get the public clouds on that bandwagon. Over the next few years, it’s primarily going to be an enterprise concept and it’s going to be growing, but eventually it’s going to reach the cloud.
There are going to be larger companies. Google and Microsoft are going to jump on this. Microsoft is a prime candidate for making this thing work, as long as they can provide something as a service, which is going to have the price point that the small to medium-sized businesses (SMBs) are going to accept, because they are the early adopters.
Michael Rowley: When we talk about the client, we’re mostly thinking about the Web-browser-based client as opposed to the client as an entire virtualized OS. When you’re using a business process management system (BPMS) and you involve people, at some point somebody is going to need to pull work off of a work list and work on it and then eventually complete it and go and get the next piece of work.
That’s done in a Web-based environment, which isn’t particularly unusual. It’s a fairly rich environment, which is something that a lot of applications are going to. Web-based applications are going to a rich Internet application (RIA) style.
We have tried to take it even a step further, taking advantage of the fact that with these RIA infrastructures you can put not just some, but the entire presentation tier of an application on the Web-browser client. Instead of traditional HTML going back and forth, the client’s communication with the server uses more of a web-service approach, going directly into the services tier on the server. That server can be in a private cloud or, potentially, a public cloud.
What’s interesting is that not having to install anything on the client, as with any of the approaches we’re discussing, is an advantage, but so is not having to maintain, on the server, a presentation tier that’s separate from your services tier.
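Rowley’s architecture can be sketched in miniature: the server exposes only services-tier operations that answer in JSON and never renders a page, because the entire presentation tier lives in the browser. The task names, fields, and operation below are illustrative, not Active Endpoints’ actual API.

```python
import json

# Hypothetical services tier: the server exposes only data operations
# as JSON web services; it renders no HTML at all. The browser-side
# presentation tier calls these operations directly.
TASKS = [
    {"id": 1, "title": "Approve purchase order", "state": "ready"},
    {"id": 2, "title": "Review contract", "state": "completed"},
]

def claim_task(task_id):
    """A services-tier operation a browser client would invoke directly."""
    for task in TASKS:
        if task["id"] == task_id and task["state"] == "ready":
            task["state"] = "claimed"
            return json.dumps(task)
    return json.dumps({"error": "task not available"})

# The server's whole job is answering service calls like this with
# JSON; all rendering of the result happens in the browser.
response = json.loads(claim_task(1))
```

The same endpoint serves any client, which is what removes the need for a separate server-side presentation tier.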
Michael Dortch: There are going to continue to be proprietary approaches to solving these problems. As the Buddhists like to say, many paths, one mountain. That’s always going to be true. But we’ve got to keep our eyes on the ultimate goal here, and that is, how do you deliver the most compelling services to the largest number of users with the most efficient use of your development resources?
Until the debate shifts more in that direction and stops being so, I want to call it, religious about bits and bytes and speeds and feeds, progress is going to be hampered. But there’s good news in HTML5, Android, Chrome, and those things. At the end of the day, there are going to be a lot of choices to be made.
The real choices to be made right now are centered on what path developers should take, so that, as the technologies evolve, they have to do as little ripping and replacing as possible. This is especially a challenge for larger companies running critical proprietary applications.
JP Morgenthal: I like to watch patterns. Look at where more applications have been created in the past three years than in any other way: on what platform and through what delivery mechanism. Have they been Web apps, or have they been iPhone/Android apps?
You’ve got to admit that the Web is a great vehicle for pure dynamic content. But at the end of the day, when there is a static portion of at least the framework and the way that the information is presented, nothing beats that client that’s already there going out and getting a small subset of information, bringing it back, and displaying it.
I see us moving back to that model. The Web is great for a fully connected high-bandwidth environment.
I’ve been following a lot about economics, especially U.S. economics, how the economy is going, and how it impacts everything. I had a great conversation with somebody in finance and investing, and we joked about how people are getting evicted, their homes are being foreclosed on, and they can barely afford to eat, but everybody in the family has an iPhone with a data plan.
Look what necessity has become, at least in the U.S., and I know it’s probably similar in Korea, Japan, and parts of Europe. Your medium for delivery of content and information is that device in the palm that’s got about a 300×200 display.
I’ve got a Droid now. Every day I see that little icon in the corner: I’ve got updates for you. I’ve updated my Seesmic three times, and my USA Today. It tells me when to update. It automatically updates my client. It’s a very neutral type of platform, and it works very, very well as the main source for me to deliver content.
Now, sometimes, is that medium too small to get something more? Yeah. So where do I go? I go to my secondary source, which is my laptop. I use my phone as my usual connectivity medium to get my Internet.
So, while we have tremendous broadband capability growing around the world, we’re living in a wireless world, and wireless is becoming the common denominator for a delivery vehicle. It’s limiting and controlling what we can get down to the end user in the client format.
Jim Kobielus: In fact, it’s the whole notion of the PC as the paradigm that’s getting deconstructed. It has been deconstructed up the yin yang. If you look at what a PC is, and we often think about a desktop, it’s actually simply a decomposition of services: rendering, interaction, connection and access, notifications, app execution, data processing, identity and authentication. These are all services that can and should be virtualized and abstracted to the cloud, private or public, because the clients themselves, the edges, are a losing battle, guys.
Try to pick winners here. This year, iPads are hot. Next year, it’s something else. The year beyond, it’s something else. What’s going to happen, and we already know it’s happening, is that everything is getting hybridized like crazy.
All these different client or edge approaches are just going to continue to blur into each other. The important thing is that the PC becomes your personal cloud. It’s all of these services that are available to you. The common denominator here for you as a user is that somehow your identity is abstracted across all the disparate services that you have access to.
All of these services are aware that you are Dave Linthicum coming in through your iPad, or Dave Linthicum coming in through a standard laptop Web browser, and so forth. Your identity and your content are all there and all secure, and that, in a sense, brings process in there too.
You don’t normally think of a process as being a service that’s specific to a client, but your hook into a process, any process, is your ability to log in. Then, have your credentials accepted and all of your privileges, permissions, and entitlements automatically provisioned to you.
Identity, in many ways, is the hook into this vast, personal cloud PC. That’s what’s happening.
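Kobielus’s point, identity as the hook into the personal cloud, can be sketched as one login resolved into per-service privileges. The service names, user, and permissions below are entirely hypothetical.

```python
# Hypothetical entitlement store: each service maps the same identity
# to its own privileges, so one login provisions everything at once.
ENTITLEMENTS = {
    "documents": {"dave": ["read", "write"]},
    "process-engine": {"dave": ["claim-task"]},
}

def provision(user, services):
    """Resolve a single identity into per-service permissions.

    A service that doesn't know the user grants nothing, rather than
    failing the whole login.
    """
    return {svc: ENTITLEMENTS.get(svc, {}).get(user, []) for svc in services}

# Same identity, whatever the client device the user comes in through.
session = provision("dave", ["documents", "process-engine", "mail"])
```

The design choice the excerpt implies is that the identity, not the device, is the stable key across disparate services.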
Rowley: A lot of applications will really mix up the presentation of the work to be done by the people who are using the application, with the underlying business process that they are enabling.
If you can somehow tease those apart and get it so that the business process itself is represented, using something like a business process model, then have the work done by the person or people divided into a specific task that they are intended to do, you can have the task, at different times, be hosted by different kinds of clients.
Or, depending on the person, whether they’re using a smartphone or a full PC, they might get a different rendering of the task, without changing the application from the perspective of the business person who is trying to understand what’s going on. Where are we in this process? What has happened? What has to happen yet? Etc.
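The separation Rowley describes, the business process model on one side and the rendering of an individual task on the other, can be sketched as two independent pieces. The process, task, and rendering details below are invented for illustration.

```python
# Hypothetical business process model: untouched no matter how any
# individual task is rendered.
PROCESS = {
    "name": "expense-approval",
    "steps": ["submit", "manager-review", "finance-payout"],
}

# One task within that process, assigned to a person.
TASK = {"step": "manager-review", "assignee": "pat", "expense_total": 412.50}

def render(task, client):
    """Same task, different presentation per client type."""
    if client == "smartphone":
        return f"[{task['step']}] ${task['expense_total']:.2f} - tap to approve"
    return (f"Step {task['step']} of process {PROCESS['name']}: "
            f"expense of ${task['expense_total']:.2f} assigned to {task['assignee']}")

small = render(TASK, "smartphone")
large = render(TASK, "desktop")
```

Because `render` never modifies `PROCESS` or `TASK`, the business person’s view of where the process stands is the same whichever client hosted the task.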
Then, for the rendering itself, it’s really useful to have that be as dynamic as possible and not have it be based on downloading an application, whether it’s an iPhone app or a PC app that needs to be updated, and you get a little sign that says you need to update this app or the other.
When you’re using something like HTML5, you can get a lot of the functionality of some of these apps that currently you have to download, including, as somebody brought up before, the question of what happens when you aren’t connected or are only partially connected.
Up until now, Web-based apps very much needed to be connected in order to do anything. HTML5 is going to include capabilities that make much more functionality available, even when you’re disconnected. That will take the technology of a Web-based client into even more circumstances where you would currently need to download one.
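The offline capability Rowley anticipates from HTML5 amounts to a queue-and-replay pattern, which can be sketched outside the browser. The class below stands in for browser-local storage and a services tier; its names are purely illustrative.

```python
# A sketch of the offline pattern that HTML5 local storage enables:
# the client queues work while disconnected and replays it when the
# connection returns. This is an illustration, not a real browser API.
class OfflineClient:
    def __init__(self):
        self.connected = False
        self.pending = []   # stands in for browser-local storage
        self.server = []    # stands in for the services tier

    def complete_task(self, task_id):
        if self.connected:
            self.server.append(task_id)
        else:
            self.pending.append(task_id)  # keep working while offline

    def reconnect(self):
        self.connected = True
        self.server.extend(self.pending)  # replay the queued work
        self.pending.clear()

client = OfflineClient()
client.complete_task(7)   # offline: queued locally, app stays usable
client.reconnect()        # the queued work reaches the server
```

This is the change that lets a Web-based client cover circumstances where, today, you would need a downloaded app.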
It’s a little bit of a change in thinking for some people to separate out those two concepts, the process from the UI for the individual task. But once you do, you get a lot of value for it.
Jones: I can see that as part of it as well. When you’re able to abstract management and security outside of those platforms and treat the platform as a service, those things become much greater possibilities.
I believe one of the gentlemen earlier commented that a lot of it needs some time to percolate and cook, and that’s absolutely the case. But I see that within the next 10 years, the platform itself becomes a service, in which you can possibly choose which one you want. It’s delivered down from the cloud to you at a basic level.
That’s what you operate on, and then all of those other services come layered in on top of that as well, whether that’s partially through a concoction of virtualization and different OS platforms, coupled with cloud-based profiles, data access, applications and those things. That’s really the future that we’re going to see here in the next 15 years or so. …
For the near term, as the client space begins to shake out over the next couple of years, the immediate benefits are first around deployment of at least the Windows platform. The current state is either an image that’s done at Dell or, more often, deploying the OS only whenever I do a hardware refresh, every three to four years. We take it to a point where you can actually just get a PC and put it onto the network.
You take out all the complexity of the deployment questions and the installation issues that cause so many different problems, combined with things like normalizing device-driver models, so that I can get that image and that computer to the corporate standard very, very quickly, even if it’s out in the middle of Timbuktu. That’s one of the immediate benefits.
Plus, start looking at the help desk and the whole concept of desktop visits. If Windows dies today, all of your agents and recovery tools die with it. That means I’ve got to send back the PC or go through some lengthy process of trying to talk the user through complicated procedures, and that’s just an expensive proposition.
You’re able to take remote-control capabilities outside of Windows into something that’s hardened at the PC level and say, OK, if Windows goes down, I can actually still connect to the PC as if I was local and remote connect to it and control it. It’s like what the IP-based KVMs did for the data center. You don’t even have to walk into the data center now. Imagine that on a grand scale for client computing.
Couple in a VPN with that. Someone is at a Starbucks, 20 minutes before a presentation, with a simple driver update that went awry and they can’t fix it. With one call to the help desk, they’re able to remote to that PC through the firewalls and take care of that issue to get them up and working.
Those are the areas that are the lowest-hanging fruit, combined with amping up security in a completely new paradigm. Imagine an antivirus that works by looking inside of Windows but doesn’t operate in the same resource or collision domain, the execution environment where the virus is actually working or trying to execute.
There is a whole level of security upgrades you can do, where you catch viruses in the space between the network and a compatible execution environment in Windows, quarantining them before they even get to an OS instance. All those areas have huge potential.
You’ve got to keep that rich user experience of the PC, yet change the architecture so that it becomes highly manageable, but also flexible as well.
Imagine a world, just cutting very quickly to the utility sense, where I’ve got my call center of 5,000 seats doing an interactive process, but I’ve got a second core dedicated to a headless virtual machine that’s doing mutual-fund arbitrage apps or something like that in a grid, and feeding that back. You have 5,000 PCs doing that for you now at a very low cost, as opposed to building out a whole data-center capacity to take care of it. Those are the kinds of futures where this type of technology can take you as well.
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: Active Endpoints sponsored this podcast.