

CHAPTER 2: LITERATURE REVIEW

2.1 Introduction
This chapter provides a review of past work relevant to the design and implementation of a charity website. It defines key terms and illustrates the different architectures used on the Web. It then gives an overview of how the Web has evolved since its birth at CERN in 1989 and presents the technologies that were introduced as the Web grew and that are still commonly used to implement Web sites today.

Finally, this chapter also includes a definition of software architecture, the terminology used in this project, the architecture of the World Wide Web and the communication model of the World Wide Web, among other topics. As previously stated, different programming languages are used in web design and implementation.

2.2 Defining the Software Architecture
As we move through the various stages of the evolution of the Web, we point out problems in today's Web development practice. This discussion leads to the introduction of the research field of Web design and implementation, its mission statement and its goals, and walks through some fundamentals of HTML, CSS and JavaScript and their related technologies and standards. These build the foundation for the successors of HTML on the Web and keep spreading into other domains such as (cross-platform) data exchange and information storage (Brian, 2007).

2.2.1 Terminology
Understanding the meaning of the terms used in any given context is a crucial requirement for effective and unambiguous communication. Unfortunately, considerable confusion exists around frequently used Web design and implementation terms such as Web site, Web application or Web service. This section presents what we understand by these terms in the context of this project.

Web Page: A Web page is a set of information items that is perceived as an indivisible entity by the client application or browser. In the case of HTML, an HTML page would be a Web page. This term is sometimes (mis)used for ‘Web site’, i.e., meaning all the pages available through a given base URL (San, 2007).

Web Site: A Web site is a collection of static and/or dynamically generated Web pages that form a unit in terms of the content they provide, often share a common look-and-feel, and are available through the same base URL (Nova, 2011).

Web Application: A Web application is similar to a Web site in that it also presents related information in a uniform graphical layout. The focus of Web applications, however, lies in the application logic (functionality) offered via the Web. A Web application can be seen as a software application or business process leveraging the Web as a new type of user interface (Oktie, 2008). “In this project, a Web application will be loosely defined as a Web system (Web server, network, HTTP, browser) in which user input (navigation and data input) effects the state of the business. This definition attempts to establish that a Web application is a software system with business state, and that its front end is in large part delivered via a Web system.”
Web Service: The term Web service is definitely one of the most over-used terms in the Web arena. In the early days of the World Wide Web, many researchers and practitioners used it as a synonym for either a Web site or a Web application. As such, a Web service was a very generic term. More recently, the term was redefined in the context of machine-to-machine services (Hemnath, 2010). These services exchange machine-readable information utilizing Web technology; the most prominent representative to date is the Simple Object Access Protocol (SOAP), which communicates via XML messages that are (usually) transmitted over HTTP.
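
For illustration, a minimal SOAP request, sketched here with a hypothetical charity service and the reserved example.org domain, shows how such machine-readable XML messages are carried over HTTP:

    POST /donationService HTTP/1.1
    Host: www.example.org
    Content-Type: text/xml; charset=utf-8

    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <!-- hypothetical operation: ask the service for the total raised by a campaign -->
        <getDonationTotal xmlns="http://www.example.org/charity">
          <campaignId>42</campaignId>
        </getDonationTotal>
      </soap:Body>
    </soap:Envelope>

The server would answer with a corresponding XML response; neither message is intended for direct human consumption, which is what distinguishes a Web service from a Web site or Web application.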

Web Development: In the scope of this thesis, we understand ‘Web development’ as the actual implementation process of a Website. This does not include requirements gathering, domain analysis, or other phases such as design, maintenance or evolution frequently found in software and Web methodologies (Ron, 2009).

2.2.2 The Architecture of the World Wide Web
Right from the beginning, the World Wide Web was designed as a client-server system. Clients (usually Web browsers) access the content of a Web server using Uniform Resource Locators (URLs) that uniquely identify any resource on the Web. Such a URL contains (among other information) the Web server’s name, the protocol to contact it, and the name of the requested resource (Vanessa & Davide, 2008).
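
As an illustration, consider the hypothetical URL http://www.example.org/charity/donate.html: here ‘http’ names the protocol used to contact the server, ‘www.example.org’ is the Web server’s name, and ‘/charity/donate.html’ identifies the requested resource.
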
Modern Web sites are also described as three-tier or multi-tier architectures. This categorization has its origins in the server-side separation of many Web sites into a Web server and a data repository/database layer. This again eases the development and maintenance of large or complex Web sites such as this charity website, but leaves the underlying client-server architecture untouched (Hemnath, 2010).

Figure 2.1 depicts the extended client-server architecture of the World Wide Web. Web browsers form the client-side of the system. These clients use client-side caching to improve performance and can use shared proxy caches to broaden the effectiveness of the cache from a single client to a cluster of clients. Web servers represent the server-side. A Web server frequently serves files from its local file system and collaborates with dedicated database servers that host the content repositories.

2.2.3 The Communication Model of the World Wide Web
The communication model of the Web is equally simple: client and server communicate via a TCP/IP connection and, as soon as a client request is completed, the connection is terminated. The details of this synchronous communication protocol are specified in the HyperText Transfer Protocol (HTTP). HTTP is an ASCII-based protocol on top of TCP that transmits requests and responses enriched by a message header that carries additional status and meta information (Callari, 2009). Over time, HTTP evolved only marginally to support new requirements such as virtual hosts (i.e., running multiple servers with varying names on the same physical machine).
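
As a sketch, again using the reserved example.org domain, a typical HTTP/1.1 exchange consists of an ASCII request followed by a response whose header carries status and meta information; the Host header is what makes virtual hosts possible:

    GET /charity/donate.html HTTP/1.1
    Host: www.example.org

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Content-Length: 1042

    <html> ... the requested Web page ... </html>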

HTTP is also a stateless protocol, i.e., the server does not maintain state on behalf of the client. Thus a server on its own can never identify subsequent requests from the same client as related. While this keeps the server simple and increases its performance, it is clearly insufficient in the context of, for instance, e-commerce applications where clients create shopping carts and, at some later time, order all articles in the shopping cart (Marcus, 2008).
2.3 Historical Overview

2.3.1 A Short History of Web Evolution
The origins of the idea of hypertext can be traced back to the 1940s (Sean & Palmer, 2001). The World Wide Web itself was ‘born’ in 1989 at the CERN laboratories. The following historical developments are presented here based on information available from the World Wide Web Consortium (W3C, 2009).

2.3.2 A First Server and Browser – The Web Infancy
In 1990, Tim Berners-Lee got the go-ahead to pursue his idea and implemented the WorldWideWeb program, a ‘What You See Is What You Get’ Web browser and editor. It is remarkable that, from the very beginning, Web clients were intended not only for viewing but also for editing Web documents.
In fact, the vision was that it should be easy for everybody to edit documents on the Web. The first Web server at CERN initially contained mainly material about the Web itself (e.g., the specifications for HTML, HTTP, URLs, etc.) to help spread the knowledge of how to run or implement a Web server and browser. More browsers for other platforms eventually appeared (Brian, 2007).

In the first three years, the load on the first Web server increased steadily by a factor of ten. As academia and industry took notice, Tim Berners-Lee decided to found the World Wide Web Consortium (W3C) to coordinate the efforts. According to him, “The Consortium is a neutral open forum where companies and organizations to whom the future of the Web is important come to discuss and to agree on new common computer protocols. It has been a center for issue raising, design, and decision by consensus, and also a fascinating vantage point from which to view that evolution.”
Here are some of the major development steps in catchwords: in December 1991, the first Web server outside Europe was installed (by Paul Kunz at the Stanford Linear Accelerator Center (SLAC)); in November 1992, a list of 26 reasonably reliable servers was published; in March 1993, Web traffic (HTTP on port 80) was measured at 0.1 percent of the NSF backbone traffic; in September 1993, Web traffic increased to 1 percent of the NSF backbone traffic; in October 1993, about 200 Web servers existed; in May 1994, the first World Wide Web conference was held at CERN and is referred to as the ‘Woodstock of the Web’; in June 1994, 1500 Web servers existed; and in October 1994, the World Wide Web Consortium was founded (Christian, Tom, Heath & Tim, 2009).

With the increasing popularity of Web sites and HTML, developers soon started to demand language extensions to deal with rendering-related information such as fonts, colors, margins, etc. It was already then that the abuse of HTML for layout-specific tasks started. A prominent example is a button with rounded corners. To achieve this in HTML, developers used a 3 x 3 table where the four corner cells contained a little picture simulating a rounded edge. If the background color of the table cells and the color used in the images are the same, the desired impression is achieved, at the cost of polluting the HTML code with tables and images that contribute solely to the layout (Norasak, 2008).
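
A minimal sketch of this workaround (with hypothetical image names and colors) illustrates how much purely presentational markup the technique requires:

    <!-- 3 x 3 table: the four corner cells hold tiny images that simulate rounded edges -->
    <table cellpadding="0" cellspacing="0" border="0">
      <tr>
        <td><img src="corner-top-left.gif" alt=""></td>
        <td bgcolor="#3366cc"></td>
        <td><img src="corner-top-right.gif" alt=""></td>
      </tr>
      <tr>
        <td bgcolor="#3366cc"></td>
        <td bgcolor="#3366cc"><font color="#ffffff">Donate now</font></td>
        <td bgcolor="#3366cc"></td>
      </tr>
      <tr>
        <td><img src="corner-bottom-left.gif" alt=""></td>
        <td bgcolor="#3366cc"></td>
        <td><img src="corner-bottom-right.gif" alt=""></td>
      </tr>
    </table>

None of this markup conveys content; it exists only to produce the rounded appearance described above.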

Cascading Style Sheets (CSS) became a W3C recommendation in 1996 and support the specification of layout properties such as fonts, font sizes, colors, etc. externally to the actual HTML code. Support for CSS version 1 is built into all popular browsers today. A more powerful second version (CSS 2) extends the original specification with support for adding text (generated content), a richer selection mechanism for elements and multiple classes of devices.
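
As a small, hypothetical example, the presentational rules can live in an external stylesheet (here called charity.css), leaving the HTML to reference it with a single link element:

    /* charity.css -- presentation kept separate from the HTML content */
    body    { font-family: Arial, sans-serif; color: #333333; margin: 2em; }
    h1      { color: #3366cc; font-size: 150%; }
    .donate { background-color: #3366cc; color: #ffffff; padding: 0.5em; }

    <!-- in the HTML document, only this reference to the stylesheet remains -->
    <link rel="stylesheet" type="text/css" href="charity.css">
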
2.3.3 Object-Oriented Hypertext Design Method – OOHDM
The object-oriented hypertext design method (OOHDM) uses abstraction and composition mechanisms in an object-oriented framework to support the description of information items, navigational patterns and interface transformations. OOHDM splits the development process into four phases supporting incremental modeling, i.e., each phase adds new object-oriented models or enriches existing models from previous phases.

The conceptual design phase defines an object-oriented model of the application domain. It is concerned only with the semantics of the domain and does not include any user- or task-related considerations. The model itself is a slight extension of well-known class diagrams (Sandoval & Bichler, 2010). The only difference is that relationships can be given an explicit direction and attributes of classes can have enumeration types (e.g., a sequence of value types rather than a single type). This is similar to defining collection types over an existing type system.

The main focus of OOHDM is on the navigational design. The navigational design defines an application as a navigational view over a conceptual domain model. The navigational model contains navigational classes including nodes, links, access structures and indices. The important concept of a node defines which parts of the conceptual model are aggregated in a single node. Navigational nodes can also be thought of as modeling the actual page structure of a Web site. Once the navigational model is completed, the navigational classes are grouped in so-called navigational contexts (Ossi, 2003).
The final phase of OOHDM is the implementation of the interface classes in terms of an actual development environment and platform. OOHDM does not propose or define any implementation technology or platform but suggests storing all modeling artifacts (conceptual classes, navigational classes, contexts, etc.) in one or more databases. Navigation contexts are then implemented as stateful objects that always keep track of the currently visible page and the other pages in the same context (e.g., to correctly switch to the next or previous page in a guided tour).

The integration of layout information is envisioned using HTML templates that are enriched by function calls to objects of the conceptual model to retrieve and embed dynamically calculated values.

2.3.4 A Simple Web Method – SWM
The simple Web method (SWM) tackles the problem that many developers find methodologies too complex and hard to understand (Nova, 2011). SWM is primarily intended for educational use and for inexperienced users. The fundamental philosophy of this approach, besides being simple, is to strongly support the early phases of the life-cycle of a Web application and to provide tool support and traceability of changes. SWM distinguishes five phases. The first phase, planning, is concerned with feasibility and project management. In the design phase, the structure, the visual layout and the navigation style are defined, resulting in a set of storyboards. In the building phase, the actual Web application is built. The maintenance phase finally covers all activities after the initial deployment of the Web site.

Consequently, SWM operates on a very abstract level that does not provide much guidance for developers. A more innovative approach is needed to support the whole life-cycle of the application and project management.
2.3.5 The WWW Design Technique – W3DT
The World Wide Web Design Technique (W3DT) consists of two parts: a modeling part that supports graphical models of the Web site and a computer-based design environment for the implementation of the modeled site. This is in contrast to the approaches presented so far, which mainly focus on supporting the modeling part. One of the goals of W3DT is that the models should be clear and intuitively comprehensible at all times. Another important aspect is its modularity, which supports distributed development, hierarchical decomposition of a site, and the development of distributed Web sites (Vanessa & Davide, 2008).

Another important difference is that, unlike RMM and OOHDM, W3DT does not start with a data or domain model but is user-centric in that it models the structure and pages of the final Web site and derives the data requirements from them. It also adds another level of abstraction by introducing the W3DT meta model that defines how the modeling primitives are related (e.g., a site consists of a set of diagrams, each consisting of pages, layouts, links, etc.).

A concrete Web site is modeled using various modeling primitives such as pages, indices, forms and links to create an instance of the meta model that represents the structure of the site. Further, W3DT distinguishes between static information and dynamic information (i.e., information that is collected or created at runtime) already in the design phase.
The methodology does not distinguish separate phases; the building of the site model is the main task. Once the site model is finished, it can immediately be implemented in W3DT’s development environment called Web Designer. Web pages are implemented using HTML templates from which skeleton HTML files are generated that have to be completed in an HTML editor (Hemnath, 2010).

Separation of concerns is supported only marginally. Layouts are separated from the actual page but consist only of attributes for the background color, the background image, a header line and a footer line. The simplicity of the method makes it easy to understand and keeps the models simple, but it also limits what those models can express.

The extended WWW design technique adds a new development process including an analysis, design, implementation and recurring evolution phase. It also conceptually separates content production from its technical realization (i.e., the roles of the content manager and the programmer are separated). Further, user input processing is explicitly included, but only to the extent that user actions directly manipulate the content of a database.
2.4 Related Work
Several approaches to the design and implementation of websites have been enumerated in the last decade, for instance Ivory et al. (2000), Aladwani and Palvia (2002), Olsina and Rossi (2002), Moraga et al. (2004), Calero et al. (2005), Seffah et al. (2006), Abramowicz et al. (2008) and Olsina et al. (2009), among others.

Ivory et al. (2000) present a methodology for evaluating information-centric websites. Five stages are proposed in the methodology:
a) Identifying an exhaustive set of quantitative interface measures such as the amount of text on a page, colour usage, consistency, etc.

b) Computing the measures for a large sample of rated interfaces
c) Deriving statistical models from the measures and ratings
d) Using the models to predict ratings for new interfaces
e) Validating model prediction.

Aladwani and Palvia (2002) proposed a 25-item instrument that captures key characteristics of website quality from the users’ perspective. The instrument was designed to measure four dimensions of web quality: specific content, content quality, appearance and technical adequacy.

Olsina and Rossi (2002) proposed the web quality evaluation method (WebQEM) to define an evaluation process in four technical phases:
a) Quality requirements definition and specification (specifying characteristics and attributes based on ISO/IEC 9126-1 (2001), such as usability, functionality, reliability and effectiveness, and taking into account the web audience’s needs)
b) Elementary evaluation (applying metrics to quantify attributes)
c) Global evaluation (selecting aggregation criteria and a scoring model)
d) Conclusion (giving recommendations).

Nevertheless, such evaluations take place mainly when the website is completed. Kahn et al. (2002) developed a model known as the Product and Service Performance model for Information Quality (PSP/IQ).
In this model a quadrant was formed wherein the column headings represented two views of quality, viz. ‘conforming to specifications’ and ‘meeting or exceeding consumer expectations’, while the row headings represented ‘product quality’ and ‘service quality’. The essential dimensions of IQ for delivering high-quality information were identified as accessibility, appropriate amount of information, believability, completeness, concise representation, consistent representation, ease of manipulation, free of error, interpretability, objectivity, relevancy, reputation, security, timeliness, understandability and value added. These dimensions were mapped into the PSP/IQ quadrants according to whether they can be achieved by conformance to specifications or by considering the changing expectations of consumers. A questionnaire was prepared, data were collected through a survey, and the mean value was calculated for all four quadrants of the model. This model considered only sixteen dimensions and their impact on performance related to information quality, without taking their interdependencies into account.

Lee et al. (2002) developed a methodology called AIM quality (AIMQ) to form a basis for Information Quality (IQ) assessment and benchmarking. The methodology encompasses a model of IQ, a questionnaire to measure IQ, and analysis techniques for interpreting the IQ measures. The PSP/IQ model formed the basis for the further development of the AIMQ methodology, which was hence an improvement upon the PSP/IQ model.
Data were collected through a survey and analysed using SPSS software for Windows. The AIMQ methodology focussed on fifteen dimensions only. Barnes and Vidgen (2002) used WebQual (a method for assessing the quality of websites) and further developed it for quantitative analysis and the production of e-commerce metrics such as the WebQual Index.
The WebQual Index was then used for assessing an organization’s e-commerce capability. Three Internet bookstores were evaluated on the basis of the WebQual Index: Amazon, BOL, and the Internet Bookstore.
Moraga et al. (2004) presented a Portal Quality Model for portlet evaluation. The Portal Quality Model was based on the SERVQUAL model proposed by Parasuraman et al. (1998). A new dimension, Data Quality (DQ), defined as “quality of the data contained in the portal”, was added alongside the Tangible, Reliability, Responsiveness, Assurance and Empathy dimensions.

Calero et al. (2005) presented the Web Quality Model (WQM), which was intended to evaluate a web application according to three dimensions: Web Features (content, presentation, and navigation); Quality Characteristics based on the ISO/IEC 9126-1 (2001) (functionality, reliability, usability, efficiency, portability, and maintainability); and Lifecycle Processes (development, operation and maintenance) including organizational processes such as project management and reuse programme management. WQM has been used to classify, according to these three dimensions, a total of 385 web metrics taken from existing literature.
Seffah et al. (2006) presented the Quality in Use Integrated Measurement (QUIM) as a consolidated model for usability measurement in web applications. QUIM combines existing models from ISO/IEC 9126-1 (2001), ISO/IEC 9241-11 (1998) and others. In this approach, usability is decomposed into factors, and then into criteria wherein a criterion can belong to different factors. Finally, these criteria are decomposed into specific metrics that can quantify the criteria.

Abramowicz et al. (2008) presented a SQuaRE-based web services quality model consisting of three perspectives: internal web service quality, external web service quality and web service quality in use.

Olsina et al. (2009) extended ISO/IEC 9126-1 (2001) model by including content quality as one of the major dimensions besides functionality, reliability, usability, efficiency, maintainability and portability.

Chiou et al. (2010) presented a web strategic framework for website evaluation. The framework was designed to be applied by a specific website in terms of its goals and objectives through a five-stage evaluation process. As such, the framework was strategy-oriented towards a specific website rather than representative of general website evaluation.

Alkhattabi et al. (2010) focused on the information quality (IQ) of e-learning websites, considering nineteen quality dimensions. The paper presents dimensions related specifically to the e-learning web environment.

2.5 Summary

This chapter has reviewed the literature pertaining to past work on the design and implementation not only of charity websites but of other website designs and developments. It defines key terms and illustrates the different architectures used on the Web. It then gives an overview of how the Web evolved and presents the associated technologies.

Finally, this chapter also includes a definition of software architecture, the terminology used in this project, the architecture of the World Wide Web and the communication model of the World Wide Web, among other topics.
The available literature on web design and implementation has covered various domains such as e-commerce, e-learning and e-governance. Most of the research papers have mentioned domain-specific quality factors. A need was therefore felt to assimilate web design and implementation factors representing contemporary advances in web technologies.

Based on the literature review, representative factors were identified for further analysis. Since the identified factors were considered sufficient to represent the current scenario of website design and implementation, a need is felt for modelling web design and implementation.
The web design and implementation system needs to be developed to consider the combined impact of its constituent subsystems and to integrate them, applying a systems approach towards the quantification of web quality. The chapter also provided an overview of the evolution of the web: Web 1.0, Web 2.0, Web 3.0 and Web 4.0 were described as four generations of the web, and the characteristics of these generations were introduced and compared. Therefore, it is important to explore different resources that are both readily available and offer a better mixture and improvement in the strength of web design and implementation.