For many years the central computing model has oscillated between centralized and decentralized computing. The first computers, like ENIAC, were, in fact, personal computers, albeit large, because only one person could use one at a time.
Then came timeshare systems, in which many remote users at individual terminals shared a large central computer. Then came the PC era, in which users had their own personal computers again.
While the decentralized PC model has advantages, it also has some serious disadvantages that are only beginning to be taken seriously. Probably the biggest problem is that each PC has a large hard drive and complex software that must be maintained.
For example, when a new version of the operating system comes out, a lot of work must be done to perform the update on each machine separately. In most corporations, the labor costs of performing this type of software maintenance overshadow the actual costs of hardware and software.
For home users, labor is technically free, but few people can do it properly, and fewer still enjoy doing it.
The solution is centralization: with a centralized system, only one or a few machines have to be upgraded, and those machines can be maintained by a staff of experts.
A related problem is that users should regularly back up their gigabyte file systems, but few do. When disaster strikes, there is a great deal of moaning and wringing of hands. With a centralized system, automated tape robots can perform backups every night. Another advantage is that resource sharing is easier with centralized systems.
A system with 256 remote users, each with 256 MB of RAM, will have most of that RAM idle most of the time. With a centralized system holding 64 GB of RAM, it never happens that a user temporarily needs a lot of RAM but cannot get it because it is in someone else's PC. The same argument holds for disk space and other resources.
Finally, we are beginning to see a shift from PC-centric computing to web-centric computing. One area where this change is underway is email. People used to receive their email on their home machine and read it there. These days, many people log into Gmail, Hotmail, or Yahoo and read their mail there.
The next step is for people to log in to other websites to do word processing, build spreadsheets, and perform other tasks that used to require PC software. It is even possible that eventually the only software people run on their PCs will be a web browser, and maybe not even that.
It’s probably a fair conclusion to say that most users want high-performance interactive computing, but don’t really want to manage a computer.
This has led researchers to reexamine timesharing using dumb terminals (now politely called thin clients) that meet modern terminal expectations. X was a step in this direction, and dedicated X terminals were popular for a while, but they fell out of favor because they cost as much as PCs, could do less, and still needed some software maintenance.
The holy grail would be a high-performance interactive computing system in which the users' machines have no software at all. Interestingly enough, this goal is achievable. Below we will describe one such thin-client system, called THINC, developed by researchers at Columbia University.
The basic idea here is to strip the client machine of all its intelligence and software and use it simply as a display, with all the computing (including building the bitmap to be displayed) done on the server side. The protocol between the client and the server merely tells the display how to update its video RAM, nothing more. Five commands are used in the protocol between the two sides.
Let us examine the commands now. Raw transmits pixel data to be displayed verbatim on the screen. In principle, this is the only command needed; the others are just optimizations. Copy instructs the display to move data from one part of its video RAM to another part. It is useful for moving content around the screen without retransmitting all the data.
Sfill fills a region of the screen with a single pixel value. Many screens have a uniform background in some color, and this command is used to generate the background first, after which text, icons, and other items can be painted.
Pfill replicates a pattern over a region. It, too, is used for backgrounds, but some backgrounds are slightly more complex than a single color, in which case this command does the job.
Finally, Bitmap also paints a region, but using a foreground color and a background color. All in all, these are very simple commands, requiring very little software on the client side. All the complexity of building the bitmaps that fill the screen is handled on the server. To improve efficiency, multiple commands can be aggregated into a single packet for transmission over the network from server to client.
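To make the five commands concrete, here is a minimal sketch of what an encoder for such a protocol might look like. The wire format (field order, sizes, the `encode` and `make_packet` helpers) is entirely hypothetical; the actual THINC encoding is not specified in the text above.

```python
import struct
from enum import IntEnum

class Cmd(IntEnum):
    """The five display commands described in the text."""
    RAW = 0     # verbatim pixel data for a region
    COPY = 1    # move a region within the display's video RAM
    SFILL = 2   # fill a region with a single pixel value
    PFILL = 3   # tile a region with a repeating pattern
    BITMAP = 4  # paint a region using foreground and background colors

def encode(cmd, x, y, w, h, payload=b""):
    """Pack one command: a fixed header (type, region, payload length)
    followed by optional pixel or pattern data."""
    return struct.pack("!BHHHHI", cmd, x, y, w, h, len(payload)) + payload

def make_packet(encoded_commands):
    """Aggregate several encoded commands into one packet, as the text
    says the server does to improve efficiency."""
    return b"".join(encoded_commands)
```

For example, painting a white background and then copying a region could be sent as one packet: `make_packet([encode(Cmd.SFILL, 0, 0, 1024, 768, struct.pack("!I", 0xFFFFFF)), encode(Cmd.COPY, 100, 100, 50, 50)])`.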
On the server side, graphics programs use high-level commands to paint the screen. These are intercepted by the THINC software and translated into commands that can be sent to the client. The commands may be reordered to improve efficiency.
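The text does not say which reorderings THINC applies, but one plausible queue optimization for any display server of this kind is overwrite elimination: an update whose region is completely covered by a later opaque update need never be sent. The sketch below is a hypothetical illustration of that idea, not THINC's actual algorithm.

```python
def covers(a, b):
    """True if region a = (x, y, w, h) completely contains region b."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax <= bx and ay <= by and ax + aw >= bx + bw and ay + ah >= by + bh

def prune(queue):
    """Drop any queued update whose region is fully covered by a later
    opaque update. Each entry is (region, opaque); translucent updates
    (opaque=False) never shadow earlier ones."""
    kept = []
    for i, (region, opaque) in enumerate(queue):
        shadowed = any(op and covers(r2, region) for r2, op in queue[i + 1:])
        if not shadowed:
            kept.append((region, opaque))
    return kept
```

The design point is that pruning is done before transmission, so the client never receives pixels it would only overwrite moments later.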
The THINC paper gives extensive performance measurements from running many common applications with servers located at distances ranging from 10 km to 10,000 km from the client. In general, its performance exceeded that of other wide-area-network systems, even for real-time video.