I am often surprised to find that many developers today still do not really understand what is meant by a "client/server" architecture, or what the difference between "tiers" and "layers" is. So I thought I would post the following explanation which, if not universally accepted, has served me well over the past 10 years.

Evolution of Client/Server Systems

Computer system architecture has evolved along with the capabilities of the hardware used to run applications. The simplest (and earliest) of all was the "mainframe architecture", in which all operations and functionality are contained within the central (or "host") computer. Users interacted with the host through 'dumb' terminals which transmitted instructions to the host by capturing keystrokes, and displayed the results of those instructions for the user. Such applications were typically character based and, despite the relatively large computing power of the mainframe hosts, were often slow and cumbersome to use because every keystroke had to be transmitted back to the host.

The introduction and widespread acceptance of the PC, with its own native computing power and graphical user interface, made it possible for applications to become more sophisticated, and the expansion of networked systems led to the second major type of system architecture: "file sharing". In this architecture the PC (or "workstation") downloads files from a dedicated "file server" and then runs the application (including the data) locally. This works well when shared usage is low, update contention is low, and the volume of data to be transferred is low. However, it rapidly became clear that file sharing choked as networks grew larger and the applications running on them grew more complex, requiring ever larger amounts of data to be transmitted back and forth.

The problems associated with handling large, data-centric applications over file-sharing networks led directly to the development of the client/server architecture in the early 1980s. In this approach the file server is replaced by a database server (the "server") which, instead of merely transmitting and saving files to its connected workstations (the "clients"), receives and actually executes requests for data, returning only the result sets to the client. By providing a query response rather than a total file transfer, this architecture significantly decreases network traffic. This allowed for the development of applications in which multiple users could update data through GUI front ends connected to a single shared database.

Typically either Structured Query Language (SQL) or remote procedure calls (RPCs) are used to communicate between the client and the server. There are several variants of the basic client/server architecture, as described below.
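To make the query-versus-file-transfer point concrete, here is a minimal sketch. It uses Python's built-in sqlite3 module as an in-process stand-in for a networked database server (a real client/server DBMS would carry the same SQL over a network connection), and the table name and columns are invented for illustration.

```python
import sqlite3

# In-process database standing in for a remote database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "globex", 75.5), (3, "acme", 300.0)],
)

# The client transmits only this SQL statement; the server executes it
# and returns just the matching rows -- not the whole orders table.
rows = conn.execute(
    "SELECT id, total FROM orders WHERE customer = ?", ("acme",)
).fetchall()
print(rows)  # [(1, 120.0), (3, 300.0)]
```

Only the two matching rows cross the (here, imaginary) network; in the file-sharing model the entire data file would have been shipped to the workstation first.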

The Two-Tier Architecture

In a two-tier architecture the workload is divided between the server (which hosts the database) and the client (which hosts the user interface). In practice these are normally located on separate physical machines, but there is no absolute requirement for this to be the case. Provided that the tiers are logically separated, they can be hosted (e.g. for development and testing) on the same computer (figure 1).

Figure 1: Basic two-tier architecture

The distribution of application logic and processing in this model was, and is, problematic. If the client is 'smart' and hosts the main application processing, then there are issues associated with distributing, installing and maintaining the application, because each client needs its own local copy of the software. If the client is 'dumb', the application logic and processing must be implemented in the database and then becomes totally dependent on the specific DBMS being used. In either scenario, each client must also have a log-in to the database and the necessary rights to carry out whatever functions are required by the application. The two-tier client/server architecture proved to be a good solution when the user population is relatively small (up to about 100 concurrent users), but it rapidly showed a number of limitations.

• Performance: as the user population grows, performance begins to deteriorate. This is the direct result of each user having their own connection to the server, which means that the server has to keep all these connections live (using "keep-alive" messages) even when no work is being done.

• Security: each user must have their own individual access to the database, and be granted whatever rights may be required in order to run the application. Apart from the security issues that this raises, maintaining users rapidly becomes a major task in its own right. This is especially problematic when new features or functionality have to be added to the application and user rights need to be updated.

• Capability: no matter what type of client is used, much of the data processing has to be located in the database, which means that it is totally dependent upon the capabilities, and implementation, provided by the database manufacturer. This can seriously limit application functionality, because different databases support different functionality, use different programming languages, and even implement such basic tools as triggers differently.

• Portability: since the two-tier architecture is so dependent upon the specific database implementation, porting an existing application to a different DBMS becomes a major issue. This is especially apparent in the case of vertical-market applications, where the choice of DBMS is not determined by the vendor.

Having said that, this architecture found a new lease of life in the internet age. It can work well in a disconnected environment where the UI is essentially dumb (i.e. a browser). However, in many ways this implementation harks back to the original mainframe architecture, and indeed a browser-based two-tier application can (and usually does) suffer from many of the same issues.

The Three-Tier Architecture

In an effort to overcome the limitations of the two-tier architecture outlined above, an additional tier was introduced, creating what is now the standard three-tier client/server model. The purpose of the additional tier (usually referred to as the "middle" or "rules" tier) is to handle application execution and database management. As with the two-tier model, the tiers can either be implemented on different physical machines (figure 2), or multiple tiers may be co-hosted on a single machine.


Figure 2: Basic three-tier architecture

By introducing the middle tier, the limitations of the two-tier architecture are largely removed and the result is a much more flexible, and scalable, system. Since clients now connect only to the application server, not directly to the data server, the load of maintaining connections is removed, as is the requirement to implement application logic within the database. The database can now be relegated to its proper role of managing the storage and retrieval of data, while application logic and processing can be handled in whatever application is most appropriate for the task. The development of operating systems to include such features as connection pooling, queuing and distributed transaction processing has enhanced (and simplified) the development of the middle tier.

Notice that, in this model, the application server does not drive the user interface, nor does it actually handle data requests directly. Instead it allows multiple clients to share business logic, computations, and access to the data retrieval engine that it exposes. This has the major advantage that the client needs less software and no longer needs a direct connection to the database, so there is less security to worry about. Consequently applications are more scalable, and support and installation costs are significantly lower for a single server than for maintaining applications directly on a desktop client, or even in a two-tier design.
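The shape of that middle tier can be sketched in a few lines. In this illustrative Python fragment (all names invented, with sqlite3 again standing in for the data tier), the application server is the only component that holds a database connection, and the business rule lives in it rather than in the client or the DBMS:

```python
import sqlite3

class AppServer:
    """Middle tier: owns the only database connection and the business rules."""

    def __init__(self):
        # Only the application server logs in to the database;
        # clients never see this connection or its credentials.
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
        self._db.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

    def withdraw(self, name, amount):
        # Business rule lives here, not in the client and not in the DBMS.
        if amount <= 0:
            raise ValueError("amount must be positive")
        (balance,) = self._db.execute(
            "SELECT balance FROM accounts WHERE name = ?", (name,)
        ).fetchone()
        if amount > balance:
            raise ValueError("insufficient funds")
        self._db.execute(
            "UPDATE accounts SET balance = balance - ? WHERE name = ?",
            (amount, name),
        )
        return balance - amount

# Any number of clients share the same server object (and the same logic).
server = AppServer()
print(server.withdraw("alice", 30.0))  # 70.0
```

Because every client goes through `withdraw`, the rule is enforced once, in one place, and swapping the underlying DBMS touches only this class.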

There are many variants of the basic three-tier model, designed to handle different application requirements. These include distributed transaction processing (where multiple DBMSs are updated in a single transaction), message-based applications (where applications do not communicate in real time) and cross-platform interoperability (Object Request Broker, or "ORB", applications).

The Multi- or N-Tier Architecture

With the growth of internet-based applications, a common enhancement of the basic three-tier client/server model has been the addition of extra tiers. Such an architecture is referred to as 'n-tier' and typically comprises four tiers (figure 3), where the web server is responsible for handling the connection between client browsers and the application server. The benefit is simply that multiple web servers can connect to a single application server, thereby handling more concurrent users.


Figure 3: N-tier architecture

Tiers vs. Layers

These terms are often (regrettably) used interchangeably. However, they really are distinct and have definite meanings. The basic difference is that tiers are physical, while layers are logical. In other words, a tier can theoretically be deployed independently on a dedicated computer, while a layer is a logical separation within a tier (figure 4). The typical three-tier model described above normally contains at least seven layers, split across the three tiers.

The key thing to remember about a layered architecture is that requests and responses each flow in one direction only, and that layers may never be "skipped". Thus, in the model shown in figure 4, the only layer that can address layer "e" (the data access layer) is layer "d" (the rules layer). Similarly, layer "c" (the application validation layer) can only respond to requests from layer "b" (the error handling layer).


Figure 4: Tiers are divided into logical layers
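One way to enforce the "no skipping" rule in code, using hypothetical layer names loosely modelled on figure 4: each layer holds a reference only to the layer directly below it, so a request can only travel down the chain one step at a time, and the response retraces the same path.

```python
class DataAccessLayer:
    """Layer 'e': the only layer that touches the data store."""
    def handle(self, request):
        return f"data for {request}"

class RulesLayer:
    """Layer 'd': the only layer allowed to address layer 'e'."""
    def __init__(self, below):
        self._below = below
    def handle(self, request):
        # Business rules would be applied here before passing the request down.
        return self._below.handle(request)

class ValidationLayer:
    """Layer 'c': validates, then passes the request to layer 'd' only."""
    def __init__(self, below):
        self._below = below
    def handle(self, request):
        if not request:
            raise ValueError("empty request")
        return self._below.handle(request)

# Wire the layers so each one can reach only its immediate neighbour.
stack = ValidationLayer(RulesLayer(DataAccessLayer()))
print(stack.handle("order 42"))  # data for order 42
```

Because no layer holds a reference to anything but its immediate neighbour, skipping a layer is structurally impossible rather than merely discouraged.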


7 Responses to Introduction to Client Server Architecture

  • Ahmad Zrein says:

    Thanks for this introduction.

    But you did not elaborate or compare the advantage and disadvantage of both.

    Second, what about the hardware installed? We have different platforms of hardware, with different performance; which is better for the two models?


    The advantages and limitations of both models are outlined here, so I am not sure what else you want. As for hardware, or which is ‘better’, the only answer to both is “it depends on your specific circumstances and requirements”. There is no single ‘generic’ answer to these questions. Sorry.

  • Eric J. Muñoz says:

    As always, thank you for sharing your knowledge, Andy. “1001 Things…” is still my personal bible, along with the rest of the Hentzen books.

    I was wondering about Terminal Services as an alternative to desktop, monolithic apps.

    What’s your point of view on this?

    Best regards.

    I haven’t done much with Terminal Server for several years, but it was (and I believe still is) a good alternative – especially if you consider that you may otherwise have to re-write your entire application. Several of our clients use TS with VFP with great success and, although there are some minor issues, overall we have been happy with the results.

  • Anil says:

    Nice info. But I am going to ask the stupid question here. I am working on a few projects with VFP/SQL; I am using a server for the database, and I have my VFP EXE on every terminal. Everything is running fine. Now I am thinking about the middle tier. I am still not able to work out what the middle tier is for my application, or how to implement it. Can you explain a bit how to take advantage of a middle tier in a live application?

    Not a stupid question by any means! Hard to answer briefly though: As I said in the article, “As with the two-tier model, the tiers can either be implemented on different physical machines, or multiple tiers may be co-hosted on a single machine.” So your VFP EXE can actually comprise both the Presentation tier and the Middle tier.

    Providing that you have maintained the separation in your code, there is no absolute reason to separate the tiers physically. Alternatively, you could set your application up so that the EXE is simply running the user interface, and accessing an external VFP DLL that contains the “middle tier” functionality (database access and rules implementation, for example). This is an architecture that I have used many times, and it works well when a VFP desktop application shares data with a web application. The DLL serves both “presentation” layers equally and allows for the maintenance of a single set of code for all presentation layers that need access to the data. See the next article in this series for a discussion of how to construct a Client/Server application.

  • Kathy Brooks says:

    Thanks, I’m taking a course in e-commerce. First college course in 27 years. I’m as technical as a peanut. My text is great, but I still get confused. Your website really helped me understand two-tier vs three-tier architecture. I especially liked the drawings; they helped me further grasp it.

    Thanks again,

    Thank you for taking the time to post the comment, I really appreciate it and it is always nice to be able to help. Good luck with the course!

  • ravi sharma says:

    Hi there,

    Very good information. I have a confusion between Web application and Client Server architecture.

    Can you please let me know it?

    Basically a web application is a type of Client. The whole point of the Client Server architecture is to define what responsibilities are allocated to the Client, and what to the Server. What the “client” is written in, or what it does, is totally irrelevant. — Andy

  • Yoseph says:

    Thank you for posting this article. I am writing an assignment about client/server, and this article has given me a grounding in how I can start writing on the background history of client/server.

  • suresh says:

    The graphics and explanations are very helpful. Awesome details!

    Thank you
