Search Engine Optimisation Parramatta

SEO Tools Parramatta

Maximise your business potential with Parramatta conversion optimisation. We deliver impactful strategies designed to boost your brand awareness, improve online visibility, and generate a steady flow of qualified leads in Parramatta.

Take your digital presence further with Parramatta marketing funnel. We develop custom strategies aimed at increasing your online visibility, improving search engine rankings, and achieving sustainable growth for your Parramatta-based business.

Experience outstanding online performance through Optimised website content Parramatta. Our expert team specialises in delivering solutions that improve rankings, drive engagement, and generate valuable leads for consistent business growth in Parramatta.

Best SEO Agency Parramatta Australia. Best SEO Parramatta Agency.

Experience outstanding online performance through Top web developers Parramatta. Our expert team specialises in delivering solutions that improve rankings, drive engagement, and generate valuable leads for consistent business growth in Parramatta.

Maximise your business potential with Digital agency Parramatta. We deliver impactful strategies designed to boost your brand awareness, improve online visibility, and generate a steady flow of qualified leads in Parramatta.

Choose excellence in digital marketing with Search visibility Parramatta. Our proven approaches drive website traffic, enhance customer engagement, and significantly improve conversion rates, supporting long-term business success in Parramatta.

Local SEO.

Search Engine Optimisation Parramatta - SEO Training Parramatta

  1. SEO Audit Services Parramatta
  2. SEO Performance Metrics Parramatta
  3. SEO Landing Pages Parramatta

Citations and other Useful links

Parramatta UI UX design

Maximise your business potential with Web and SEO consulting Parramatta. We deliver impactful strategies designed to boost your brand awareness, improve online visibility, and generate a steady flow of qualified leads in Parramatta.

Experience outstanding online performance through Parramatta design and development. Our expert team specialises in delivering solutions that improve rankings, drive engagement, and generate valuable leads for consistent business growth in Parramatta.

Choose excellence in digital marketing with Results-driven SEO Parramatta. Our proven approaches drive website traffic, enhance customer engagement, and significantly improve conversion rates, supporting long-term business success in Parramatta.

Best SEO Packages Parramatta.

Search Engine Optimisation Parramatta - SEO Training Parramatta

  1. Parramatta SEO Workshops
  2. Parramatta Online Visibility
  3. Parramatta SEO Consultants
Conversion-focused web design Parramatta

Transform your business growth with SEO Parramatta. Our strategies enhance visibility, attract targeted traffic, and maximise conversions for sustained success. Partner with us for measurable digital marketing outcomes today.

Take your digital presence further with Web Design Parramatta. We develop custom strategies aimed at increasing your online visibility, improving search engine rankings, and achieving sustainable growth for your Parramatta-based business.

Experience outstanding online performance through Local SEO Parramatta. Our expert team specialises in delivering solutions that improve rankings, drive engagement, and generate valuable leads for consistent business growth in Parramatta.



Search Engine Optimisation Parramatta - SEO Tools Parramatta

  1. Parramatta Organic Traffic Growth
  2. SEO Consulting Parramatta
  3. SEO-driven Lead Generation Parramatta
  4. Parramatta SEO Agency

Fast-loading websites Parramatta

Choose excellence in digital marketing with Parramatta SEO services. Our proven approaches drive website traffic, enhance customer engagement, and significantly improve conversion rates, supporting long-term business success in Parramatta.

Take your digital presence further with Parramatta web design agency. We develop custom strategies aimed at increasing your online visibility, improving search engine rankings, and achieving sustainable growth for your Parramatta-based business.

Experience outstanding online performance through Search Engine Optimisation Parramatta. Our expert team specialises in delivering solutions that improve rankings, drive engagement, and generate valuable leads for consistent business growth in Parramatta.

SEO landing pages Parramatta

Maximise your business potential with Affordable SEO Parramatta. We deliver impactful strategies designed to boost your brand awareness, improve online visibility, and generate a steady flow of qualified leads in Parramatta.

Take your digital presence further with Custom web design Parramatta. We develop custom strategies aimed at increasing your online visibility, improving search engine rankings, and achieving sustainable growth for your Parramatta-based business.

Experience outstanding online performance through eCommerce web design Parramatta. Our expert team specialises in delivering solutions that improve rankings, drive engagement, and generate valuable leads for consistent business growth in Parramatta.

Parramatta SEO marketing

Transform your business growth with Parramatta digital marketing. Our strategies enhance visibility, attract targeted traffic, and maximise conversions for sustained success. Partner with us for measurable digital marketing outcomes today.

Maximise your business potential with Best SEO agency Parramatta. We deliver impactful strategies designed to boost your brand awareness, improve online visibility, and generate a steady flow of qualified leads in Parramatta.

Choose excellence in digital marketing with SEO expert Parramatta. Our proven approaches drive website traffic, enhance customer engagement, and significantly improve conversion rates, supporting long-term business success in Parramatta.



Search Engine Optimisation Parramatta - SEO Tools Parramatta

  1. SEO Tools Parramatta
  2. Local Map SEO Parramatta
  3. Website Speed Optimisation Parramatta
  4. SEO Training Parramatta
Parramatta SEO marketing

 

A tag cloud (a typical Web 3.0 phenomenon in itself) presenting Web 3.0 themes

The Semantic Web, sometimes known as Web 3.0 (not to be confused with Web3), is an extension of the World Wide Web through standards[1] set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.

To enable the encoding of semantics with the data, technologies such as Resource Description Framework (RDF)[2] and Web Ontology Language (OWL)[3] are used. These technologies are used to formally represent metadata. For example, an ontology can describe concepts, relationships between entities, and categories of things. These embedded semantics offer significant advantages such as reasoning over data and operating with heterogeneous data sources.[4] These standards promote common data formats and exchange protocols on the Web, fundamentally RDF. According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries."[5] The Semantic Web is therefore regarded as an integrator across different content and information applications and systems.

History


The term was coined by Tim Berners-Lee for a web of data (or data web)[6] that can be processed by machines[7]—that is, one in which much of the meaning is machine-readable. While its critics have questioned its feasibility, proponents argue that applications in library and information science, industry, biology and human sciences research have already proven the validity of the original concept.[8]

Berners-Lee originally expressed his vision of the Semantic Web in 1999 as follows:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A "Semantic Web", which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.[9]

The 2001 Scientific American article by Berners-Lee, Hendler, and Lassila described an expected evolution of the existing Web to a Semantic Web.[10] In 2006, Berners-Lee and colleagues stated that: "This simple idea…remains largely unrealized".[11] In 2013, more than four million Web domains (out of roughly 250 million total) contained Semantic Web markup.[12]

Example


In the following example, the text "Paul Schuster was born in Dresden" on a website will be annotated, connecting a person with their place of birth. The following HTML fragment shows how a small graph is being described, in RDFa-syntax using a schema.org vocabulary and a Wikidata ID:

<div vocab="https://schema.org/" typeof="Person">
  <span property="name">Paul Schuster</span> was born in
  <span property="birthPlace" typeof="Place" href="https://www.wikidata.org/entity/Q1731">
    <span property="name">Dresden</span>.
  </span>
</div>
Graph resulting from the RDFa example
 

The example defines the following five triples (shown in Turtle syntax). Each triple represents one edge in the resulting graph: the first element of the triple (the subject) is the name of the node where the edge starts, the second element (the predicate) is the type of the edge, and the third element (the object) is either the name of the node where the edge ends or a literal value (e.g. a text, a number, etc.).

 _:a <https://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://schema.org/Person> .
 _:a <https://schema.org/name> "Paul Schuster" .
 _:a <https://schema.org/birthPlace> <https://www.wikidata.org/entity/Q1731> .
 <https://www.wikidata.org/entity/Q1731> <https://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://schema.org/Place> .
 <https://www.wikidata.org/entity/Q1731> <https://schema.org/name> "Dresden" .

The triples result in the graph shown in the given figure.

Graph resulting from the RDFa example, enriched with further data from the Web

One of the advantages of using Uniform Resource Identifiers (URIs) is that they can be dereferenced using the HTTP protocol. According to the so-called Linked Open Data principles, such a dereferenced URI should result in a document that offers further data about the given URI. In this example, all URIs, both for edges and nodes (e.g. http://schema.org/Person, http://schema.org/birthPlace, http://www.wikidata.org/entity/Q1731) can be dereferenced and will result in further RDF graphs, describing the URI, e.g. that Dresden is a city in Germany, or that a person, in the sense of that URI, can be fictional.
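
As a rough illustration of dereferencing in practice (not part of the original example), the following Python sketch fetches RDF for the Wikidata URI above using HTTP content negotiation. The third-party requests and rdflib packages are assumptions here, and whether a server actually returns RDF for a given Accept header depends on how that server is configured.

import requests                    # assumed third-party HTTP client
from rdflib import Graph           # assumed third-party RDF toolkit

uri = "https://www.wikidata.org/entity/Q1731"   # Dresden, from the example above

# Ask for an RDF serialization (Turtle) instead of an HTML page.
response = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
response.raise_for_status()

graph = Graph()
graph.parse(data=response.text, format="turtle")

print(len(graph), "triples describe or mention", uri)
for subject, predicate, obj in list(graph)[:5]:   # show a few of them
    print(subject, predicate, obj)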

The second graph shows the previous example, but now enriched with a few of the triples from the documents that result from dereferencing https://schema.org/Person (green edge) and https://www.wikidata.org/entity/Q1731 (blue edges).

In addition to the edges given explicitly in the involved documents, edges can be automatically inferred: the triple

 _:a <https://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://schema.org/Person> .

from the original RDFa fragment and the triple

 <https://schema.org/Person> <http://www.w3.org/2002/07/owl#equivalentClass> <http://xmlns.com/foaf/0.1/Person> .

from the document at https://schema.org/Person (green edge in the figure) allow the following triple to be inferred, given OWL semantics (red dashed line in the second Figure):

 _:a <https://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
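
As a minimal sketch of how this inference can be reproduced, assuming the third-party rdflib and owlrl Python packages (neither is part of the W3C standards, and any OWL-capable toolkit would do), the code below loads the example triples plus the owl:equivalentClass statement and materializes the OWL RL closure:

from rdflib import Graph, Namespace, RDF
import owlrl   # assumed third-party OWL 2 RL reasoner

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

data = """
@prefix schema: <https://schema.org/> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .

_:a a schema:Person ;
    schema:name "Paul Schuster" ;
    schema:birthPlace <https://www.wikidata.org/entity/Q1731> .

<https://www.wikidata.org/entity/Q1731> schema:name "Dresden" .

# The statement obtained by dereferencing https://schema.org/Person:
schema:Person owl:equivalentClass foaf:Person .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Materialize the OWL RL closure; this adds the inferred rdf:type foaf:Person triple.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

for person in g.subjects(RDF.type, FOAF.Person):
    print(person, "is (by inference) a foaf:Person")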

Background


The concept of the semantic network model was formed in the early 1960s by researchers such as the cognitive scientist Allan M. Collins, linguist Ross Quillian and psychologist Elizabeth F. Loftus as a form to represent semantically structured knowledge. When applied in the context of the modern internet, it extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other. This enables automated agents to access the Web more intelligently and perform more tasks on behalf of users. The term "Semantic Web" was coined by Tim Berners-Lee,[7] the inventor of the World Wide Web and director of the World Wide Web Consortium ("W3C"), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as "a web of data that can be processed directly and indirectly by machines".

Many of the technologies proposed by the W3C already existed before they were positioned under the W3C umbrella. These are used in various contexts, particularly those dealing with information that encompasses a limited and defined domain, and where sharing data is a common necessity, such as scientific research or data exchange among businesses. In addition, other technologies with similar goals have emerged, such as microformats.

Limitations of HTML


Many files on a typical computer can be loosely divided into either human-readable documents, or machine-readable data. Examples of human-readable document files are mail messages, reports, and brochures. Examples of machine-readable data files are calendars, address books, playlists, and spreadsheets, which are presented to a user using an application program that lets the files be viewed, searched, and combined.

Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags provide a method by which computers can categorize the content of web pages. In the examples below, the field names "keywords", "description" and "author" are assigned values such as "computing", "cheap widgets for sale" and "John Doe".

<meta name="keywords" content="computing, computer studies, computer" />
<meta name="description" content="Cheap widgets for sale" />
<meta name="author" content="John Doe" />

Because of this metadata tagging and categorization, other computer systems that want to access and share this data can easily identify the relevant values.
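
As a minimal sketch of how another program might pick these values up, the following Python snippet uses only the standard library to collect the name/content pairs from the meta tags shown above; no particular search engine or crawler implementation is implied.

from html.parser import HTMLParser

html_doc = """
<meta name="keywords" content="computing, computer studies, computer" />
<meta name="description" content="Cheap widgets for sale" />
<meta name="author" content="John Doe" />
"""

class MetaTagReader(HTMLParser):
    # Collects name/content pairs from <meta> tags.
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attributes = dict(attrs)
            if "name" in attributes and "content" in attributes:
                self.metadata[attributes["name"]] = attributes["content"]

reader = MetaTagReader()
reader.feed(html_doc)
print(reader.metadata)   # {'keywords': 'computing, ...', 'description': 'Cheap widgets for sale', 'author': 'John Doe'}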

With HTML and a tool to render it (perhaps web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as "this document's title is 'Widget Superstore'", but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text "X586172" is something that should be positioned near "Acme Gizmo" and "€199", etc. There is no way to say "this is a catalog" or even to establish that "Acme Gizmo" is a kind of title or that "€199" is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.

Semantic HTML refers to the traditional HTML practice of markup following intention, rather than specifying layout details directly. For example, the use of <em> denoting "emphasis" rather than <i>, which specifies italics. Layout details are left up to the browser, in combination with Cascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices.

Microformats extend HTML syntax to create machine-readable semantic markup about objects including people, organizations, events and products.[13] Similar initiatives include RDFa, Microdata and Schema.org.

Semantic Web solutions


The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts.

These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases,[14] or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research.

An example of a tag that would be used in a non-semantic web page:

<item>blog</item>

Encoding similar information in a semantic web page might look like this:

<item rdf:about="https://example.org/semantic-web/">Semantic Web</item>

Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based World Wide Web. Berners-Lee posits that if the past was document sharing, the future is data sharing. His answer to the question of "how" provides three points of instruction. One, a URL should point to the data. Two, anyone accessing the URL should get data back. Three, relationships in the data should point to additional URLs with data.

Tags and identifiers


Tags, including hierarchical categories and tags that are collaboratively added and maintained (e.g. with folksonomies), can be considered part of, of potential use to, or a step towards the Semantic Web vision.[15][16][17]

Unique identifiers, including hierarchical categories and collaboratively added ones, analysis tools, and metadata, including tags, can be used to create forms of semantic webs – webs that are to a certain degree semantic.[18] In particular, such approaches have been used to structure scientific research, for instance by research topics and scientific fields, in the projects OpenAlex,[19][20][21] Wikidata and Scholia, which are under development and provide APIs, web pages, feeds and graphs for various semantic queries.

Web 3.0


Tim Berners-Lee has described the Semantic Web as a component of Web 3.0.[22]

People keep asking what Web 3.0 is. I think maybe when you've got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you'll have access to an unbelievable data resource …

— Tim Berners-Lee, 2006

"Semantic Web" is sometimes used as a synonym for "Web 3.0",[23] though the definition of each term varies.

Beyond Web 3.0


The next generation of the Web is often termed Web 4.0, but its definition is not clear. According to some sources, it is a Web that involves artificial intelligence,[24] the internet of things, pervasive computing, ubiquitous computing and the Web of Things among other concepts.[25] According to the European Union, Web 4.0 is "the expected fourth generation of the World Wide Web. Using advanced artificial and ambient intelligence, the internet of things, trusted blockchain transactions, virtual worlds and XR capabilities, digital and real objects and environments are fully integrated and communicate with each other, enabling truly intuitive, immersive experiences, seamlessly blending the physical and digital worlds".[26]

Challenges


Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty, inconsistency, and deceit. Automated reasoning systems will have to deal with all of these issues in order to deliver on the promise of the Semantic Web.

  • Vastness: The World Wide Web contains many billions of pages. The SNOMED CT medical terminology ontology alone contains 370,000 class names, and existing technology has not yet been able to eliminate all semantically duplicated terms. Any automated reasoning system will have to deal with truly huge inputs.
  • Vagueness: These are imprecise concepts like "young" or "tall". This arises from the vagueness of user queries, of concepts represented by content providers, of matching query terms to provider terms and of trying to combine different knowledge bases with overlapping but subtly different concepts. Fuzzy logic is the most common technique for dealing with vagueness.
  • Uncertainty: These are precise concepts with uncertain values. For example, a patient might present a set of symptoms that correspond to a number of different distinct diagnoses each with a different probability. Probabilistic reasoning techniques are generally employed to address uncertainty.
  • Inconsistency: These are logical contradictions that will inevitably arise during the development of large ontologies, and when ontologies from separate sources are combined. Deductive reasoning fails catastrophically when faced with inconsistency, because "anything follows from a contradiction". Defeasible reasoning and paraconsistent reasoning are two techniques that can be employed to deal with inconsistency.
  • Deceit: This is when the producer of the information is intentionally misleading the consumer of the information. Cryptography techniques are currently utilized to alleviate this threat by providing a means to verify the information's integrity, including the identity of the entity that produced or published the information; however, credibility issues still have to be addressed in cases of potential deceit.

This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to the "unifying logic" and "proof" layers of the Semantic Web. The World Wide Web Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web[27] (URW3-XG) final report lumps these problems together under the single heading of "uncertainty".[28] Many of the techniques mentioned here will require extensions to the Web Ontology Language (OWL) for example to annotate conditional probabilities. This is an area of active research.[29]

Standards


Standardization for Semantic Web in the context of Web 3.0 is under the care of W3C.[30]

Components


The term "Semantic Web" is often used more specifically to refer to the formats and technologies that enable it.[5] The collection, structuring and recovery of linked data are enabled by technologies that provide a formal description of concepts, terms, and relationships within a given knowledge domain. These technologies are specified as W3C standards and include:

The Semantic Web Stack illustrates the architecture of the Semantic Web. The functions and relationships of the components can be summarized as follows:[31]

  • XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within. XML is not at present a necessary component of Semantic Web technologies in most cases, as alternative syntaxes exist, such as Turtle. Turtle began as a de facto standard and was later formalised as a W3C Recommendation in 2014.
  • XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.
  • RDF is a simple language for expressing data models, which refer to objects ("web resources") and their relationships. An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa (see the serialization sketch after this list). RDF is a fundamental standard of the Semantic Web.[32][33]
  • RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
  • OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.
  • SPARQL is a protocol and query language for semantic web data sources.
  • RIF is the W3C Rule Interchange Format. It is an XML language for expressing Web rules that computers can execute. RIF provides multiple versions, called dialects. It includes a RIF Basic Logic Dialect (RIF-BLD) and RIF Production Rules Dialect (RIF PRD).
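
To make the RDF component above concrete, the sketch below rebuilds the Paul Schuster example as a small graph and serializes it in three of the syntaxes mentioned; the third-party rdflib Python package is an assumption, used here only as one convenient toolkit.

from rdflib import BNode, Graph, Literal, Namespace, RDF, URIRef

SCHEMA = Namespace("https://schema.org/")

g = Graph()
g.bind("schema", SCHEMA)

person = BNode()                                            # corresponds to _:a in the earlier example
dresden = URIRef("https://www.wikidata.org/entity/Q1731")

g.add((person, RDF.type, SCHEMA.Person))
g.add((person, SCHEMA.name, Literal("Paul Schuster")))
g.add((person, SCHEMA.birthPlace, dresden))

# The same data model, three different serializations.
print(g.serialize(format="turtle"))   # Turtle
print(g.serialize(format="xml"))      # RDF/XML
print(g.serialize(format="nt"))       # N-Triples

The choice of syntax affects only how the triples are written down; the underlying graph, and anything that queries or reasons over it, stays the same.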

Current state of standardization


Well-established standards:

  • RDF
  • RDF Schema
  • OWL
  • SPARQL
  • RIF

Not yet fully realized:

  • the "unifying logic" and "proof" layers

Applications


The intent is to enhance the usability and usefulness of the Web and its interconnected resources by creating semantic web services, such as:

  • Servers that expose existing data systems using the RDF and SPARQL standards. Many converters to RDF exist from different applications.[34] Relational databases are an important source. The semantic web server attaches to the existing system without affecting its operation.
  • Documents "marked up" with semantic information (an extension of the HTML <meta> tags used in today's Web pages to supply information for Web search engines using web crawlers). This could be machine-understandable information about the human-understandable content of the document (such as the creator, title, description, etc.) or it could be purely metadata representing a set of facts (such as resources and services elsewhere on the site). Note that anything that can be identified with a Uniform Resource Identifier (URI) can be described, so the semantic web can reason about animals, people, places, ideas, etc. There are four semantic annotation formats that can be used in HTML documents; Microformat, RDFa, Microdata and JSON-LD.[35] Semantic markup is often generated automatically, rather than manually.
Arguments as distinct semantic units with specified relations and version control on Kialo
  • Common metadata vocabularies (ontologies) and maps between vocabularies that allow document creators to know how to mark up their documents so that agents can use the information in the supplied metadata (so that Author in the sense of 'the Author of the page' will not be confused with Author in the sense of a book that is the subject of a book review).
  • Automated agents to perform tasks for users of the semantic web using this data.
  • Semantic translation. An alternative or complementary approach is to improve the contextual and semantic understanding of texts – this could be aided via Semantic Web methods so that progressively fewer mistranslations need to be corrected in manual or semi-automated post-editing.
  • Web-based services (often with agents of their own) to supply information specifically to agents, for example, a Trust service that an agent could ask if some online store has a history of poor service or spamming.
  • Semantic Web ideas are implemented in collaborative structured argument mapping sites where arguments and their relations are organized semantically, and arguments can be mirrored (linked) to multiple places, reused (copied), rated, and changed as semantically distinct units. Ideas for such sites, or for a more widely adopted "World Wide Argument Web", go back to at least 2007[36] and have been implemented to some degree in Argüman[37] and Kialo. Further steps towards semantic web services may include enabling "Querying", argument search engines,[38] and "summarizing the contentious and agreed-upon points of a discussion".[39]
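
As an illustrative sketch of the JSON-LD option mentioned in the list above, the snippet below embeds a small JSON-LD document modelled on the earlier Paul Schuster example and converts it into triples; rdflib is an assumed third-party package, and recent releases bundle JSON-LD support while older ones need the separate rdflib-jsonld plugin.

from rdflib import Graph

jsonld_doc = """
{
  "@context": {
    "schema": "https://schema.org/",
    "name": "schema:name",
    "birthPlace": {"@id": "schema:birthPlace", "@type": "@id"}
  },
  "@type": "schema:Person",
  "name": "Paul Schuster",
  "birthPlace": "https://www.wikidata.org/entity/Q1731"
}
"""

g = Graph()
g.parse(data=jsonld_doc, format="json-ld")   # same statements as the RDFa example, minus the Dresden label

for subject, predicate, obj in g:
    print(subject, predicate, obj)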

Such services could be useful to public search engines, or could be used for knowledge management within an organization. Business applications include:

  • Facilitating the integration of information from mixed sources[40]
  • Dissolving ambiguities in corporate terminology
  • Improving information retrieval thereby reducing information overload and increasing the refinement and precision of the data retrieved[41][42][43][44]
  • Identifying relevant information with respect to a given domain[45]
  • Providing decision making support

In a corporation, there is a closed group of users and the management is able to enforce company guidelines such as the adoption of specific ontologies and the use of semantic annotation. Compared to the public Semantic Web, there are fewer requirements on scalability, and the information circulating within a company can generally be trusted more; privacy is less of an issue, apart from the handling of customer data.

Skeptical reactions


Practical feasibility


Critics question the basic feasibility of a complete or even partial fulfillment of the Semantic Web, pointing out both difficulties in setting it up and a lack of general-purpose usefulness that prevents the required effort from being invested. In a 2003 paper, Marshall and Shipman point out the cognitive overhead inherent in formalizing knowledge, compared to the authoring of traditional web hypertext:[46]

While learning the basics of HTML is relatively straightforward, learning a knowledge representation language or tool requires the author to learn about the representation's methods of abstraction and their effect on reasoning. For example, understanding the class-instance relationship, or the superclass-subclass relationship, is more than understanding that one concept is a "type of" another concept. [...] These abstractions are taught to computer scientists generally and knowledge engineers specifically but do not match the similar natural language meaning of being a "type of" something. Effective use of such a formal representation requires the author to become a skilled knowledge engineer in addition to any other skills required by the domain. [...] Once one has learned a formal representation language, it is still often much more effort to express ideas in that representation than in a less formal representation [...]. Indeed, this is a form of programming based on the declaration of semantic data and requires an understanding of how reasoning algorithms will interpret the authored structures.

According to Marshall and Shipman, the tacit and changing nature of much knowledge adds to the knowledge engineering problem, and limits the Semantic Web's applicability to specific domains. A further issue that they point out is domain- or organization-specific ways to express knowledge, which must be solved through community agreement rather than only technical means.[46] As it turns out, specialized communities and organizations for intra-company projects have tended to adopt semantic web technologies to a greater degree than peripheral and less-specialized communities.[47] The practical constraints on adoption have appeared less challenging where the domain and scope are more limited than those of the general public and the World Wide Web.[47]

Finally, Marshall and Shipman see pragmatic problems in the idea of (Knowledge Navigator-style) intelligent agents working in the largely manually curated Semantic Web:[46]

In situations in which user needs are known and distributed information resources are well described, this approach can be highly effective; in situations that are not foreseen and that bring together an unanticipated array of information resources, the Google approach is more robust. Furthermore, the Semantic Web relies on inference chains that are more brittle; a missing element of the chain results in a failure to perform the desired action, while the human can supply missing pieces in a more Google-like approach. [...] cost-benefit tradeoffs can work in favor of specially-created Semantic Web metadata directed at weaving together sensible well-structured domain-specific information resources; close attention to user/customer needs will drive these federations if they are to be successful.

Cory Doctorow's critique ("metacrap")[48] is from the perspective of human behavior and personal preferences. For example, people may include spurious metadata into Web pages in an attempt to mislead Semantic Web engines that naively assume the metadata's veracity. This phenomenon was well known with metatags that fooled the Altavista ranking algorithm into elevating the ranking of certain Web pages: the Google indexing engine specifically looks for such attempts at manipulation. Peter Gärdenfors and Timo Honkela point out that logic-based semantic web technologies cover only a fraction of the relevant phenomena related to semantics.[49][50]

Censorship and privacy


Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analyzing techniques can now be easily bypassed by using other words (metaphors, for instance) or by using images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that, with the use of FOAF files and geolocation meta-data, there would be very little anonymity associated with the authorship of articles on things such as a personal blog. Some of these concerns were addressed in the "Policy Aware Web" project,[51] and the area remains an active research and development topic.

Doubling output formats


Another criticism of the semantic web is that it would be much more time-consuming to create and publish content because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism. Another argument in defense of the feasibility of the semantic web is the likely falling price of human intelligence tasks in digital labor markets, such as Amazon's Mechanical Turk.[citation needed]

Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. The GRDDL (Gleaning Resource Descriptions from Dialects of Language) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML.

Research activities on corporate applications


The first research group explicitly focusing on the Corporate Semantic Web was the ACACIA team at INRIA-Sophia-Antipolis, founded in 2002. Results of their work include the RDF(S) based Corese[52] search engine, and the application of semantic web technology in the realm of distributed artificial intelligence for knowledge management (e.g. ontologies and multi-agent systems for corporate semantic Web) [53] and E-learning.[54]

Since 2008, the Corporate Semantic Web research group, located at the Free University of Berlin, has focused on three building blocks: Corporate Semantic Search, Corporate Semantic Collaboration, and Corporate Ontology Engineering.[55]

Ontology engineering research includes the question of how to involve non-expert users in creating ontologies and semantically annotated content[56] and how to extract explicit knowledge from the interactions of users within enterprises.

Future of applications


Tim O'Reilly, who coined the term Web 2.0, proposed a long-term vision of the Semantic Web as a web of data that sophisticated applications navigate and manipulate.[57] The data web transforms the World Wide Web from a distributed file system into a distributed database.[58]

See also


References

  1. ^ Semantic Web at W3C: https://www.w3.org/standards/semanticweb/
  2. ^ "World Wide Web Consortium (W3C), "RDF/XML Syntax Specification (Revised)", 25 Feb. 2014".
  3. ^ "World Wide Web Consortium (W3C), "OWL Web Ontology Language Overview", W3C Recommendation, 10 Feb. 2004".
  4. ^ Chung, Seung-Hwa (2018). "The MOUSE approach: Mapping Ontologies using UML for System Engineers". Computer Reviews Journal: 8–29. ISSN 2581-6640.
  5. ^ a b "W3C Semantic Web Activity". World Wide Web Consortium (W3C). November 7, 2011. Retrieved November 26, 2011.
  6. ^ "Q&A with Tim Berners-Lee, Special Report". Bloomberg. Retrieved 14 April 2018.
  7. ^ a b Berners-Lee, Tim; James Hendler; Ora Lassila (May 17, 2001). "The Semantic Web". Scientific American. Retrieved July 2, 2019.
  8. ^ Lee Feigenbaum (May 1, 2007). "The Semantic Web in Action". Scientific American. Retrieved February 24, 2010.
  9. ^ Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web. HarperSanFrancisco. chapter 12. ISBN 978-0-06-251587-2.
  10. ^ Berners-Lee, Tim; Hendler, James; Lassila, Ora (May 17, 2001). "The Semantic Web" (PDF). Scientific American. Vol. 284, no. 5. pp. 34–43. JSTOR 26059207. S2CID 56818714. Archived from the original (PDF) on October 10, 2017. Retrieved March 13, 2008.
  11. ^ Nigel Shadbolt; Wendy Hall; Tim Berners-Lee (2006). "The Semantic Web Revisited" (PDF). IEEE Intelligent Systems. Archived from the original (PDF) on March 20, 2013. Retrieved April 13, 2007.
  12. ^ Ramanathan V. Guha (2013). "Light at the End of the Tunnel". International Semantic Web Conference 2013 Keynote. Retrieved March 8, 2015.
  13. ^ Allsopp, John (March 2007). Microformats: Empowering Your Markup for Web 2.0. Friends of ED. p. 368. ISBN 978-1-59059-814-6.
  14. ^ Artem Chebotko and Shiyong Lu, "Querying the Semantic Web: An Efficient Approach Using Relational Databases", LAP Lambert Academic Publishing, ISBN 978-3-8383-0264-5, 2009.
  15. ^ "Towards the Semantic Web: Collaborative Tag Suggestions" (PDF).
  16. ^ Specia, Lucia; Motta, Enrico (2007). "Integrating Folksonomies with the Semantic Web". The Semantic Web: Research and Applications. Lecture Notes in Computer Science. Vol. 4519. Springer. pp. 624–639. doi:10.1007/978-3-540-72667-8_44. ISBN 978-3-540-72666-1.
  17. ^ "Bridging the gap between folksonomies and the semantic web: an experience report" (PDF).
  18. ^ Nicholson, Josh M.; Mordaunt, Milo; Lopez, Patrice; Uppala, Ashish; Rosati, Domenic; Rodrigues, Neves P.; Grabitz, Peter; Rife, Sean C. (5 November 2021). "scite: A smart citation index that displays the context of citations and classifies their intent using deep learning". Quantitative Science Studies. 2 (3): 882–898. doi:10.1162/qss_a_00146.
  19. ^ Singh Chawla, Dalmeet (24 January 2022). "Massive open index of scholarly papers launches". Nature. doi:10.1038/d41586-022-00138-y. Retrieved 14 February 2022.
  20. ^ "OpenAlex: The Promising Alternative to Microsoft Academic Graph". Singapore Management University (SMU). Retrieved 14 February 2022.
  21. ^ "OpenAlex Documentation". Retrieved 18 February 2022.
  22. ^ Shannon, Victoria (23 May 2006). "A 'more revolutionary' Web". International Herald Tribune. Retrieved 26 June 2006.
  23. ^ "Web 3.0 Explained, Plus the History of Web 1.0 and 2.0". Investopedia. Retrieved 2022-10-21.
  24. ^ https://www.rsisinternational.org/IJRSI/Issue31/75-78.pdf
  25. ^ Almeida, F. (2017). Concept and dimensions of web 4.0. International journal of computers and technology, 16(7).
  26. ^ "The Commission wants the EU to lead on 'Web 4.0' — whatever that is". 11 July 2023.
  27. ^ "W3C Uncertainty Reasoning for the World Wide Web". www.w3.org. Retrieved 2021-05-14.
  28. ^ "Uncertainty Reasoning for the World Wide Web". W3.org. Retrieved 20 December 2018.
  29. ^ Lukasiewicz, Thomas; Umberto Straccia (2008). "Managing uncertainty and vagueness in description logics for the Semantic Web" (PDF). Web Semantics: Science, Services and Agents on the World Wide Web. 6 (4): 291–308. doi:10.1016/j.websem.2008.04.001.
  30. ^ "Semantic Web Standards". W3.org. Retrieved 14 April 2018.
  31. ^ "OWL Web Ontology Language Overview". World Wide Web Consortium (W3C). February 10, 2004. Retrieved November 26, 2011.
  32. ^ "Resource Description Framework (RDF)". World Wide Web Consortium.
  33. ^ Allemang, Dean; Hendler, James; Gandon, Fabien (August 3, 2020). Semantic Web for the Working Ontologist : Effective Modeling for Linked Data, RDFS, and OWL (Third ed.). [New York, NY, USA]: ACM Books; 3rd edition. ISBN 978-1450376143.
  34. ^ "ConverterToRdf - W3C Wiki". W3.org. Retrieved 20 December 2018.
  35. ^ Sikos, Leslie F. (2015). Mastering Structured Data on the Semantic Web: From HTML5 Microdata to Linked Open Data. Apress. p. 23. ISBN 978-1-4842-1049-9.
  36. ^ Kiesel, Johannes; Lang, Kevin; Wachsmuth, Henning; Hornecker, Eva; Stein, Benno (14 March 2020). "Investigating Expectations for Voice-based and Conversational Argument Search on the Web". Proceedings of the 2020 Conference on Human Information Interaction and Retrieval. ACM. pp. 53–62. doi:10.1145/3343413.3377978. ISBN 9781450368926. S2CID 212676751.
  37. ^ Vetere, Guido (30 June 2018). "L'impossibile necessità delle piattaforme sociali decentralizzate". DigitCult - Scientific Journal on Digital Cultures. 3 (1): 41–50. doi:10.4399/97888255159096.
  38. ^ Bikakis, Antonis; Flouris, Giorgos; Patkos, Theodore; Plexousakis, Dimitris (2023). "Sketching the vision of the Web of Debates". Frontiers in Artificial Intelligence. 6. doi:10.3389/frai.2023.1124045. ISSN 2624-8212. PMC 10313200. PMID 37396970.
  39. ^ Schneider, Jodi; Groza, Tudor; Passant, Alexandre. "A Review of Argumentation for the Social Semantic Web" (PDF).
  40. ^ Zhang, Chuanrong; Zhao, Tian; Li, Weidong (2015). Geospatial Semantic Web. Springer International Publishing : Imprint: Springer. ISBN 978-3-319-17801-1.
  41. ^ Omar Alonso and Hugo Zaragoza. 2008. Exploiting semantic annotations in information retrieval: ESAIR '08. SIGIR Forum 42, 1 (June 2008), 55–58. doi:10.1145/1394251.1394262
  42. ^ Jaap Kamps, Jussi Karlgren, and Ralf Schenkel. 2011. Report on the third workshop on exploiting semantic annotations in information retrieval (ESAIR). SIGIR Forum 45, 1 (May 2011), 33–41. doi:10.1145/1988852.1988858
  43. ^ Jaap Kamps, Jussi Karlgren, Peter Mika, and Vanessa Murdock. 2012. Fifth workshop on exploiting semantic annotations in information retrieval: ESAIR '12). In Proceedings of the 21st ACM international conference on information and knowledge management (CIKM '12). ACM, New York, NY, USA, 2772–2773. doi:10.1145/2396761.2398761
  44. ^ Omar Alonso, Jaap Kamps, and Jussi Karlgren. 2015. Report on the Seventh Workshop on Exploiting Semantic Annotations in Information Retrieval (ESAIR '14). SIGIR Forum 49, 1 (June 2015), 27–34. doi:10.1145/2795403.2795412
  45. ^ Kuriakose, John (September 2009). "Understanding and Adopting Semantic Web Technology". Cutter IT Journal. 22 (9). CUTTER INFORMATION CORP.: 10–18.
  46. ^ a b c Marshall, Catherine C.; Shipman, Frank M. (2003). Which semantic web? (PDF). Proc. ACM Conf. on Hypertext and Hypermedia. pp. 57–66. Archived from the original (PDF) on 2015-09-23. Retrieved 2015-04-17.
  47. ^ a b Ivan Herman (2007). State of the Semantic Web (PDF). Semantic Days 2007. Retrieved July 26, 2007.
  48. ^ Doctorow, Cory. "Metacrap: Putting the torch to seven straw-men of the meta-utopia". www.well.com/. Retrieved 11 September 2023.
  49. ^ Gärdenfors, Peter (2004). How to make the Semantic Web more semantic. IOS Press. pp. 17–34.
  50. ^ Honkela, Timo; Könönen, Ville; Lindh-Knuutila, Tiina; Paukkeri, Mari-Sanna (2008). "Simulating processes of concept formation and communication". Journal of Economic Methodology. 15 (3): 245–259. doi:10.1080/13501780802321350. S2CID 16994027.
  51. ^ "Policy Aware Web Project". Policyawareweb.org. Retrieved 2013-06-14.
  52. ^ Corby, Olivier; Dieng-Kuntz, Rose; Zucker, Catherine Faron; Gandon, Fabien (2006). "Searching the Semantic Web: Approximate Query Processing based on Ontologies". IEEE Intelligent Systems. 21: 20–27. doi:10.1109/MIS.2006.16. S2CID 11488848.
  53. ^ Gandon, Fabien (7 November 2002). Distributed Artificial Intelligence And Knowledge Management: Ontologies And Multi-Agent Systems For A Corporate Semantic Web (phdthesis). Université Nice Sophia Antipolis.
  54. ^ Buffa, Michel; Dehors, Sylvain; Faron-Zucker, Catherine; Sander, Peter (2005). "Towards a Corporate Semantic Web Approach in Designing Learning Systems: Review of the Trial Solutions Project" (PDF). International Workshop on Applications of Semantic Web Technologies for E-Learning. Amsterdam, Holland. pp. 73–76.
  55. ^ "Corporate Semantic Web - Home". Corporate-semantic-web.de. Retrieved 14 April 2018.
  56. ^ Hinze, Annika; Heese, Ralf; Luczak-Rösch, Markus; Paschke, Adrian (2012). "Semantic Enrichment by Non-Experts: Usability of Manual Annotation Tools" (PDF). ISWC'12 - Proceedings of the 11th international conference on The Semantic Web. Boston, USA. pp. 165–181.
  57. ^ Mathieson, S. A. (6 April 2006). "Spread the word, and join it up". The Guardian. Retrieved 14 April 2018.
  58. ^ Spivack, Nova (18 September 2007). "The Semantic Web, Collective Intelligence and Hyperdata". novaspivack.typepad.com/nova_spivacks_weblog [This Blog has Moved to NovaSpivack.com]. Retrieved 14 April 2018.

Further reading


 

Web design encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; user interface design (UI design); authoring, including standardised code and proprietary software; user experience design (UX design); and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[1] The term "web design" is normally used to describe the design process relating to the front-end (client side) design of a website including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and be up to date with web accessibility guidelines.

History

Web design books in a store

1988–2001


Although web design has a fairly recent history, it can be linked to other areas such as graphic design, user experience, and multimedia arts, but it is more aptly seen from a technological standpoint. It has become a large part of people's everyday lives. It is hard to imagine the Internet without animated graphics, different styles of typography, backgrounds, videos and music. The web was announced on August 6, 1991, and the first website went live at CERN. During this early period, websites were structured using the <table> tag, which designers used to impose columns and structure on a page. Eventually, web designers found ways to work around its limitations to create richer structures and formats. In this early history, the structure of websites was fragile and hard to maintain, which made them very difficult to use. In November 1993, ALIWEB (Archie Like Indexing for the WEB) became the first search engine to be created.[2]

The start of the web and web design


In 1989, whilst working at CERN in Switzerland, British scientist Tim Berners-Lee proposed to create a global hypertext project, which later became known as the World Wide Web. From 1991 to 1993 the World Wide Web was born. Text-only HTML pages could be viewed using a simple line-mode web browser.[3] In 1993 Marc Andreessen and Eric Bina created the Mosaic browser. At the time there were multiple browsers; however, the majority of them were Unix-based and naturally text-heavy. There had been no integrated approach to graphic design elements such as images or sounds. The Mosaic browser broke this mould.[4] The W3C was created in October 1994 to "lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability."[5] This discouraged any one company from monopolizing a proprietary browser and programming language, which could have altered the effect of the World Wide Web as a whole. The W3C continues to set standards, which can today be seen with JavaScript and other languages. In 1994 Andreessen formed Mosaic Communications Corp., which later became known as Netscape Communications and released the Netscape Navigator 0.9 browser. Netscape created its own HTML tags without regard to the traditional standards process. For example, Netscape 1.1 included tags for changing background colours and formatting text with tables on web pages. From 1996 to 1999 the browser wars began, as Microsoft and Netscape fought for ultimate browser dominance. During this time there were many new technologies in the field, notably Cascading Style Sheets, JavaScript, and Dynamic HTML. On the whole, the browser competition did lead to many positive creations and helped web design evolve at a rapid pace.[6]

Evolution of web design


In 1996, Microsoft released its first competitive browser, which was complete with its own features and HTML tags. It was also the first browser to support style sheets, which at the time were seen as an obscure authoring technique and are today an important aspect of web design.[6] The HTML markup for tables was originally intended for displaying tabular data. However, designers quickly realized the potential of using HTML tables for creating complex, multi-column layouts that were otherwise not possible. At this time, as design and good aesthetics seemed to take precedence over good markup structure, little attention was paid to semantics and web accessibility. HTML sites were limited in their design options, even more so with earlier versions of HTML. To create complex designs, many web designers had to use complicated table structures or even use blank spacer .GIF images to stop empty table cells from collapsing.[7] CSS was introduced in December 1996 by the W3C to support presentation and layout. This allowed HTML code to be semantic rather than both semantic and presentational, and improved web accessibility; see tableless web design.

In 1996, Flash (originally known as FutureSplash) was developed. At the time, the Flash content development tool was relatively simple compared to now, using basic layout and drawing tools, a limited precursor to ActionScript, and a timeline, but it enabled web designers to go beyond the point of HTML, animated GIFs and JavaScript. However, because Flash required a plug-in, many web developers avoided using it for fear of limiting their market share due to lack of compatibility. Instead, designers reverted to GIF animations (if they did not forego using motion graphics altogether) and JavaScript for widgets. But the benefits of Flash made it popular enough among specific target markets to eventually work its way to the vast majority of browsers, and powerful enough to be used to develop entire sites.[7]

End of the first browser wars


In 1998, Netscape released Netscape Communicator code under an open-source licence, enabling thousands of developers to participate in improving the software. However, these developers decided to rewrite the browser from scratch, an effort that guided the development of the open-source browser and soon expanded to a complete application platform.[6] The Web Standards Project was formed and promoted browser compliance with HTML and CSS standards. Programs like Acid1, Acid2, and Acid3 were created in order to test browsers for compliance with web standards. In 2000, Internet Explorer was released for Mac, which was the first browser that fully supported HTML 4.01 and CSS 1. It was also the first browser to fully support the PNG image format.[6] By 2001, after a campaign by Microsoft to popularize Internet Explorer, Internet Explorer had reached 96% of web browser usage share, which signified the end of the first browser wars as Internet Explorer had no real competition.[8]

2001–2012


Since the start of the 21st century, the web has become more and more integrated into people's lives. As this has happened the technology of the web has also moved on. There have also been significant changes in the way people use and access the web, and this has changed how sites are designed.

Since the end of the browser wars, new browsers have been released. Many of these are open source, meaning that they tend to have faster development and are more supportive of new standards. The new options are considered by many to be better than Microsoft's Internet Explorer.

The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard. While the term HTML5 is only used to refer to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript).

2012 and later


With the advancements in 3G and LTE internet coverage, a significant portion of website traffic shifted to mobile devices. This shift influenced the web design industry, steering it towards a minimalist, lighter, and more simplistic style. The "mobile first" approach emerged as a result, emphasizing the creation of website designs that prioritize mobile-oriented layouts first, before adapting them to larger screen dimensions.

Tools and technologies


Web designers use a variety of different tools depending on what part of the production process they are involved in. These tools are updated over time by newer standards and software but the principles behind them remain the same. Web designers use both vector and raster graphics editors to create web-formatted imagery or design prototypes. A website can be created using WYSIWYG website builder software or a content management system, or the individual web pages can be hand-coded in just the same manner as the first web pages were created. Other tools web designers might use include markup validators[9] and other testing tools for usability and accessibility to ensure their websites meet web accessibility guidelines.[10]

UX Design


One popular discipline related to web design is UX (user experience) design, which focuses on designing products around users' needs, contexts and behaviour. UX design is a broad field: it is not limited to the web, and its fundamentals can be applied to browsers, native apps and many other kinds of products. Web design, by contrast, is concerned mostly with web-based experiences. The two overlap, but UX design also covers products that are not web-based.[11]

Skills and techniques


Marketing and communication design


Marketing and communication design on a website may identify what works for its target market. This can be an age group or particular strand of culture; thus the designer may understand the trends of its audience. Designers may also understand the type of website they are designing, meaning, for example, that business-to-business (B2B) website design considerations might differ greatly from those for a consumer-targeted website such as a retail or entertainment website. Careful consideration might be made to ensure that the aesthetics or overall design of a site do not clash with the clarity and accuracy of the content or the ease of web navigation,[12] especially on a B2B website. Designers may also consider the reputation of the owner or business the site is representing to make sure they are portrayed favorably. Web designers normally oversee how the websites they build work and operate, constantly updating and changing elements behind the scenes; the elements they work with include text, photos, graphics, and the layout of the page. Before beginning work on a website, web designers normally set an appointment with their clients to discuss layout, colour, graphics, and design. Web designers spend the majority of their time designing websites and making sure loading speed is acceptable. They also typically engage in testing, marketing, and communicating with other designers about laying out the websites and finding the right elements for them.[13]

User experience design and interactive design


User understanding of the content of a website often depends on user understanding of how the website works. This is part of the user experience design. User experience is related to layout, clear instructions, and labeling on a website. How well a user understands how they can interact on a site may also depend on the interactive design of the site. If a user perceives the usefulness of the website, they are more likely to continue using it. Users who are skilled and well versed in website use may find a more distinctive, yet less intuitive or less user-friendly website interface useful nonetheless. However, users with less experience are less likely to see the advantages or usefulness of a less intuitive website interface. This drives the trend for a more universal user experience and ease of access to accommodate as many users as possible regardless of user skill.[14] Much of the user experience design and interactive design are considered in the user interface design.

Advanced interactive functions may require plug-ins if not advanced coding language skills. Choosing whether or not to use interactivity that requires plug-ins is a critical decision in user experience design. If the plug-in doesn't come pre-installed with most browsers, there's a risk that the user will have neither the know-how nor the patience to install a plug-in just to access the content. If the function requires advanced coding language skills, it may be too costly in either time or money to code compared to the amount of enhancement the function will add to the user experience. There's also a risk that advanced interactivity may be incompatible with older browsers or hardware configurations. Publishing a function that doesn't work reliably is potentially worse for the user experience than making no attempt. It depends on the target audience if it's likely to be needed or worth any risks.

Progressive enhancement

The order of progressive enhancement

Progressive enhancement is a strategy in web design that puts emphasis on web content first, allowing everyone to access the basic content and functionality of a web page, whilst users with additional browser features or faster Internet access receive the enhanced version instead.

In practice, this means serving content through HTML and applying styling and animation through CSS to the technically possible extent, then applying further enhancements through JavaScript. Pages' text is loaded immediately through the HTML source code rather than having to wait for JavaScript to initiate and load the content subsequently, which allows content to be readable with minimum loading time and bandwidth, and through text-based browsers, and maximizes backwards compatibility.[15]
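
A minimal sketch of this layering (the markup, style, and script below are illustrative rather than drawn from any particular site): the text lives in the HTML itself, CSS adds presentation, and JavaScript adds an optional extra that the page does not depend on.

<!DOCTYPE html>
<html>
  <head>
    <title>Article</title>
    <style>
      /* Styling is an enhancement; the text is readable without it */
      article { max-width: 40em; font-family: sans-serif; }
    </style>
  </head>
  <body>
    <article>
      <h1>Progressive enhancement</h1>
      <p>This paragraph is delivered in the HTML source, so it is readable
         even in a text-based browser or with scripts disabled.</p>
    </article>
    <script>
      // Optional enhancement: add a "back to top" link only when JS runs
      var link = document.createElement('a');
      link.href = '#top';
      link.textContent = 'Back to top';
      document.querySelector('article').appendChild(link);
    </script>
  </body>
</html>

If the script or style sheet fails to load, the article text is still fully readable; the enhancements are simply absent.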

As an example, MediaWiki-based sites including Wikipedia use progressive enhancement, as they remain usable while JavaScript and even CSS is deactivated, as pages' content is included in the page's HTML source code, whereas counter-example Everipedia relies on JavaScript to load pages' content subsequently; a blank page appears with JavaScript deactivated.

Page layout


Part of the user interface design is affected by the quality of the page layout. For example, a designer may consider whether the site's page layout should remain consistent on different pages when designing the layout. Page pixel width may also be considered vital for aligning objects in the layout design. The most popular fixed-width websites generally have the same set width to match the current most popular browser window, at the current most popular screen resolution, on the current most popular monitor size. Most pages are also center-aligned for concerns of aesthetics on larger screens.

Fluid layouts increased in popularity around 2000 to allow the browser to make user-specific layout adjustments based on the details of the reader's screen (window size, font size relative to window, etc.). They grew as an alternative to HTML-table-based layouts and grid-based design in both page layout design principles and in coding technique but were very slow to be adopted.[note 1] This was due to considerations of screen reading devices and varying window sizes which designers have no control over. Accordingly, a design may be broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas) that are sent to the browser and which will be fitted into the display window by the browser, as best it can. Although such a display may often change the relative position of major content units, sidebars may be displaced below body text rather than to the side of it. This is a more flexible display than a hard-coded grid-based layout that doesn't fit the device window. In particular, the relative position of content blocks may change while leaving the content within the block unaffected. This also minimizes the user's need to horizontally scroll the page.

Responsive web design is a newer approach, based on CSS3, and a deeper level of per-device specification within the page's style sheet through an enhanced use of the CSS @media rule. In March 2018 Google announced they would be rolling out mobile-first indexing.[16] Sites using responsive design are well placed to ensure they meet this new approach.
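
Both ideas can be sketched in a few lines of CSS (the class names and the 600px breakpoint are invented for the example): relative widths let the browser fit the layout to the reader's window, and an @media rule moves the sidebar below the body text on narrow screens.

<style>
  /* Fluid layout: widths are relative, so the browser adapts the design
     to whatever window size the reader actually has */
  .content { width: 70%; float: left; }
  .sidebar { width: 28%; float: right; }

  /* Responsive rule: below an (arbitrary) 600px breakpoint, the sidebar
     drops below the body text instead of sitting beside it */
  @media (max-width: 600px) {
    .content, .sidebar { width: 100%; float: none; }
  }
</style>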

Typography


Web designers may choose to limit the variety of website typefaces to only a few which are of a similar style, instead of using a wide range of typefaces or type styles. Most browsers recognize a specific number of safe fonts, which designers mainly use in order to avoid complications.

Font downloading was later included in the CSS3 fonts module and has since been implemented in Safari 3.1, Opera 10, and Mozilla Firefox 3.5. This has subsequently increased interest in web typography, as well as the usage of font downloading.
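
Both practices can be sketched in a few lines of CSS (the font name and file path are placeholders): an @font-face rule, the CSS3 mechanism behind font downloading, and a font stack that falls back to widely available "safe" fonts.

<style>
  /* Downloaded font; the family name and file path are placeholders */
  @font-face {
    font-family: "BrandFont";
    src: url("/fonts/brandfont.woff2") format("woff2");
  }

  /* Font stack: if the downloaded font is unavailable, the browser falls
     back to common "safe" fonts, then to any sans-serif */
  body {
    font-family: "BrandFont", Arial, Helvetica, sans-serif;
  }
</style>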

Most site layouts incorporate negative space to break the text up into paragraphs and also avoid center-aligned text.[17]

Motion graphics


The page layout and user interface may also be affected by the use of motion graphics. The choice of whether or not to use motion graphics may depend on the target market for the website. Motion graphics may be expected or at least better received with an entertainment-oriented website. However, a website's target audience with a more serious or formal interest (such as business, community, or government) might find animations unnecessary and distracting if used only for entertainment or decoration. This doesn't mean that more serious content couldn't be enhanced with animated or video presentations that are relevant to the content. In either case, motion graphic design may make the difference between effective visuals and distracting visuals.

Motion graphics that are not initiated by the site visitor can produce accessibility issues. The World Wide Web Consortium's accessibility standards require that site visitors be able to disable the animations.[18]
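
One common way to honour that requirement in CSS, offered here as an illustrative sketch rather than a quotation of the guideline itself, is the prefers-reduced-motion media feature, which disables decorative animation for visitors who have asked their system to reduce motion.

<style>
  /* Decorative animation, defined as usual */
  @keyframes slide {
    from { transform: translateX(0); }
    to   { transform: translateX(100%); }
  }
  .banner { animation: slide 10s linear infinite; }

  /* Respect the visitor's "reduce motion" preference by disabling it */
  @media (prefers-reduced-motion: reduce) {
    .banner { animation: none; }
  }
</style>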

Quality of code


Website designers may consider it to be good practice to conform to standards. This is usually done via a description specifying what the element is doing. Failure to conform to standards may not make a website unusable or error-prone, but standards can relate to the correct layout of pages for readability as well as making sure coded elements are closed appropriately. This includes errors in code, a more organized layout for code, and making sure IDs and classes are identified properly. Poorly coded pages are sometimes colloquially called tag soup. Validating via W3C[9] can only be done when a correct DOCTYPE declaration is made, which is used to highlight errors in code. The system identifies the errors and areas that do not conform to web design standards. This information can then be corrected by the user.[19]
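A minimal page that would pass such validation might look like the sketch below: the DOCTYPE declaration on the first line tells the validator which rules to apply, every opened element is closed, and IDs and classes are stated explicitly.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>A valid page</title>
  </head>
  <body>
    <h1 id="main-heading" class="page-title">Quality of code</h1>
    <p>Elements are closed properly, and IDs and classes are declared explicitly.</p>
  </body>
</html>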

Generated content


There are two ways websites are generated: statically or dynamically.

Static websites


A static website stores a unique file for every page. Each time a page is requested, the same content is returned. This content is created once, during the design of the website. It is usually manually authored, although some sites use an automated creation process, similar to a dynamic website, whose results are stored long-term as completed pages. These automatically created static sites became more popular around 2015, with generators such as Jekyll and Adobe Muse.[20]

The benefits of a static website are that it is simpler to host, as the server only needs to serve static content rather than execute server-side scripts. This requires less server administration and has less chance of exposing security holes. Static sites can also serve pages more quickly, on low-cost server hardware. This advantage became less important as cheap web hosting expanded to also offer dynamic features, and virtual servers offered high performance for short intervals at low cost.

Almost all websites have some static content, as supporting assets such as images and style sheets are usually static, even on a website with highly dynamic pages.

Dynamic websites


Dynamic websites are generated on the fly and use server-side technology to generate web pages. They typically extract their content from one or more back-end databases: some issue queries against a relational database to browse a catalogue or to summarise numeric information, while others may use a NoSQL document database such as MongoDB to store larger units of content, such as blog posts or wiki articles.

In the design process, dynamic pages are often mocked-up or wireframed using static pages. The skillset needed to develop dynamic web pages is much broader than for a static page, involving server-side and database coding as well as client-side interface design. Even medium-sized dynamic projects are thus almost always a team effort.

When dynamic web pages first developed, they were typically coded directly in languages such as Perl, PHP or ASP. Some of these, notably PHP and ASP, used a 'template' approach where a server-side page resembled the structure of the completed client-side page, and data was inserted into places defined by 'tags'. This was a quicker means of development than coding in a purely procedural coding language such as Perl.
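
As a generic sketch of the template approach (the placeholder syntax below is invented for illustration and not tied to PHP, ASP, or any particular engine), the server-side page mirrors the finished HTML, and data from the database is inserted at the marked spots:

<html>
  <body>
    <h1>{{ page_title }}</h1>
    <ul>
      <!-- Repeated once for each product row returned by the catalogue query -->
      <li>{{ product_name }} - {{ product_price }}</li>
    </ul>
  </body>
</html>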

Both of these approaches have now been supplanted for many websites by higher-level application-focused tools such as content management systems. These build on top of general-purpose coding platforms and assume that a website exists to offer content according to one of several well-recognised models, such as a time-sequenced blog, a thematic magazine or news site, a wiki, or a user forum. These tools make the implementation of such a site very easy, and a purely organizational and design-based task, without requiring any coding.

Editing the content itself (as well as the template page) can be done both by means of the site itself and with the use of third-party software. The ability to edit all pages is provided only to a specific category of users (for example, administrators, or registered users). In some cases, anonymous users are allowed to edit certain web content, which is less frequent (for example, on forums, where they may add messages). An example of a site that allows anonymous editing is Wikipedia.

Homepage design


Usability experts, including Jakob Nielsen and Kyle Soucy, have often emphasised homepage design for website success and asserted that the homepage is the most important page on a website (see Nielsen, Jakob; Tahir, Marie (October 2001), Homepage Usability: 50 Websites Deconstructed, New Riders Publishing, ISBN 978-0-7357-1102-0).[21][22][23] However, practitioners into the 2000s were starting to find that a growing share of website traffic was bypassing the homepage, going directly to internal content pages through search engines, e-newsletters, and RSS feeds.[24] This led many practitioners to argue that homepages are less important than most people think.[25][26][27][28] Jared Spool argued in 2007 that a site's homepage was actually the least important page on a website.[29]

In 2012 and 2013, carousels (also called 'sliders' and 'rotating banners') became an extremely popular design element on homepages, often used to showcase featured or recent content in a confined space.[30] Many practitioners argue that carousels are an ineffective design element that hurts a website's search engine optimisation and usability.[30][31][32]

Occupations


There are two primary jobs involved in creating a website: the web designer and web developer, who often work closely together on a website.[33] The web designers are responsible for the visual aspect, which includes the layout, colouring, and typography of a web page. Web designers will also have a working knowledge of markup languages such as HTML and CSS, although the extent of their knowledge will differ from one web designer to another. Particularly in smaller organizations, one person will need the necessary skills for designing and programming the full web page, while larger organizations may have a web designer responsible for the visual aspect alone.

Further jobs which may become involved in the creation of a website include:

  • Graphic designers to create visuals for the site such as logos, layouts, and buttons
  • Internet marketing specialists to help maintain web presence through strategic solutions on targeting viewers to the site, by using marketing and promotional techniques on the internet
  • SEO writers to research and recommend the correct words to be incorporated into a particular website and make the website more accessible and found on numerous search engines
  • Internet copywriter to create the written content of the page to appeal to the targeted viewers of the site[1]
  • User experience (UX) designer incorporates aspects of user-focused design considerations which include information architecture, user-centred design, user testing, interaction design, and occasionally visual design.

Artificial intelligence and web design


ChatGPT and other AI models are being used to write and code websites, making it faster and easier to create them. There are still discussions about the ethical implications of using artificial intelligence for design, as the world becomes more familiar with delegating time-consuming design tasks to AI.[34]


Notes

  1. ^ <table>-based markup and spacer .GIF images

References

  1. ^ a b Lester, Georgina. "Different jobs and responsibilities of various people involved in creating a website". Arts Wales UK. Retrieved 2012-03-17.
  2. ^ CPBI, Ryan Shelley. "The History of Website Design: 30 Years of Building the Web [2022 Update]". www.smamarketing.net. Retrieved 2022-10-12.
  3. ^ "Longer Biography". Retrieved 2012-03-16.
  4. ^ "Mosaic Browser" (PDF). Archived from the original (PDF) on 2013-09-02. Retrieved 2012-03-16.
  5. ^ Zwicky, E.D.; Cooper, S.; Chapman, D.B. (2000). Building Internet Firewalls. United States: O'Reilly & Associates. p. 804. ISBN 1-56592-871-7.
  6. ^ a b c d Niederst, Jennifer (2006). Web Design In a Nutshell. United States of America: O'Reilly Media. pp. 12–14. ISBN 0-596-00987-9.
  7. ^ a b Chapman, Cameron, The Evolution of Web Design, Six Revisions, archived from the original on 30 October 2013
  8. ^ "AMO.NET America's Multimedia Online (Internet Explorer 6 PREVIEW)". amo.net. Retrieved 2020-05-27.
  9. ^ a b "W3C Markup Validation Service".
  10. ^ W3C. "Web Accessibility Initiative (WAI)".
  11. ^ "What is Web Design?". The Interaction Design Foundation. Retrieved 2022-10-12.
  12. ^ THORLACIUS, LISBETH (2007). "The Role of Aesthetics in Web Design". Nordicom Review. 28 (28): 63–76. doi:10.1515/nor-2017-0201. S2CID 146649056.
  13. ^ "What is a Web Designer? (2022 Guide)". BrainStation®. Retrieved 2022-10-28.
  14. ^ Castañeda, J. Alberto; Muñoz-Leiva, Francisco; Luque, Teodoro (2007). "Web Acceptance Model (WAM): Moderating effects of user experience". Information & Management. 44 (4): 384–396. doi:10.1016/j.im.2007.02.003.
  15. ^ "Building a resilient frontend using progressive enhancement". GOV.UK. Retrieved 27 October 2021.
  16. ^ "Rolling out mobile-first indexing". Official Google Webmaster Central Blog. Retrieved 2018-06-09.
  17. ^ Stone, John (2009-11-16). "20 Do's and Don'ts of Effective Web Typography". Retrieved 2012-03-19.
  18. ^ World Wide Web Consortium: Understanding Web Content Accessibility Guidelines 2.2.2: Pause, Stop, Hide
  19. ^ W3C QA. "My Web site is standard! And yours?". Retrieved 2012-03-21.
  20. ^ Christensen, Mathias Biilmann (2015-11-16). "Static Website Generators Reviewed: Jekyll, Middleman, Roots, Hugo". Smashing Magazine. Retrieved 2016-10-26.
  21. ^ Soucy, Kyle, Is Your Homepage Doing What It Should?, Usable Interface, archived from the original on 8 June 2012
  22. ^ Nielsen, Jakob (10 November 2003), The Ten Most Violated Homepage Design Guidelines, Nielsen Norman Group, archived from the original on 5 October 2013
  23. ^ Knight, Kayla (20 August 2009), Essential Tips for Designing an Effective Homepage, Six Revisions, archived from the original on 21 August 2013
  24. ^ Spool, Jared (29 September 2005), Is Home Page Design Relevant Anymore?, User Interface Engineering, archived from the original on 16 September 2013
  25. ^ Chapman, Cameron (15 September 2010), 10 Usability Tips Based on Research Studies, Six Revisions, archived from the original on 2 September 2013
  26. ^ Gócza, Zoltán, Myth #17: The homepage is your most important page, archived from the original on 2 June 2013
  27. ^ McGovern, Gerry (18 April 2010), The decline of the homepage, archived from the original on 24 May 2013
  28. ^ Porter, Joshua (24 April 2006), Prioritizing Design Time: A Long Tail Approach, User Interface Engineering, archived from the original on 14 May 2013
  29. ^ Spool, Jared (6 August 2007), Usability Tools Podcast: Home Page Design, archived from the original on 29 April 2013
  30. ^ a b Messner, Katie (22 April 2013), Image Carousels: Getting Control of the Merry-Go-Round, Usability.gov, archived from the original on 10 October 2013
  31. ^ Jones, Harrison (19 June 2013), Homepage Sliders: Bad For SEO, Bad For Usability, archived from the original on 22 November 2013
  32. ^ Laja, Peep (8 June 2019), Image Carousels and Sliders? Don't Use Them. (Here's why.), CXL, archived from the original on 10 December 2019
  33. ^ Oleksy, Walter (2001). Careers in Web Design. New York: The Rosen Publishing Group, Inc. pp. 9–11. ISBN 978-0-8239-3191-0.
  34. ^ Visser, Larno, et al. ChatGPT for Web Design: Create Amazing Websites. First edition. Packt Publishing, 2023.

World Wide Web
  • Abbreviation: WWW
  • Year started: 1989, by Tim Berners-Lee
  • Organization: CERN (1989–1994); W3C (1994–current)
A web page from Wikipedia displayed in Google Chrome

The World Wide Web (WWW or simply the Web) is an information system that enables content sharing over the Internet through user-friendly ways meant to appeal to users beyond IT specialists and hobbyists.[1] It allows documents and other web resources to be accessed over the Internet according to specific rules of the Hypertext Transfer Protocol (HTTP).[2]

The Web was invented by English computer scientist Tim Berners-Lee while at CERN in 1989 and opened to the public in 1993. It was conceived as a "universal linked information system".[3][4][5] Documents and other media content are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and resources on the World Wide Web are identified and located through character strings called uniform resource locators (URLs).

The original and still very common document type is a web page formatted in Hypertext Markup Language (HTML). This markup language supports plain text, images, embedded video and audio contents, and scripts (short programs) that implement complex user interaction. The HTML language also supports hyperlinks (embedded URLs) which provide immediate access to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages that function as application software. The information in the Web is transferred across the Internet using HTTP. Multiple web resources with a common theme and usually a common domain name make up a website. A single web server may provide multiple websites, while some websites, especially the most popular ones, may be provided by multiple servers. Website content is provided by a myriad of companies, organizations, government agencies, and individual users; and comprises an enormous amount of educational, entertainment, commercial, and government information.

The Web has become the world's dominant information systems platform.[6][7][8][9] It is the primary tool that billions of people worldwide use to interact with the Internet.[2]

History

This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.

The Web was invented by English computer scientist Tim Berners-Lee while working at CERN.[10][11] He was motivated by the problem of storing, updating, and finding documents and data files in that large and constantly changing organization, as well as distributing them to collaborators outside CERN. In his design, Berners-Lee dismissed the common tree structure approach, used for instance in the existing CERNDOC documentation system and in the Unix filesystem, as well as approaches that relied on tagging files with keywords, as in the VAX/NOTES system. Instead he adopted concepts he had put into practice with his private ENQUIRE system (1980) built at CERN. When he became aware of Ted Nelson's hypertext model (1965), in which documents can be linked in unconstrained ways through hyperlinks associated with "hot spots" embedded in the text, it helped to confirm the validity of his concept.[12][13]

The historic World Wide Web logo, designed by Robert Cailliau. Currently, there is no widely accepted logo in use for the WWW.

The model was later popularized by Apple's HyperCard system. Unlike Hypercard, Berners-Lee's new system from the outset was meant to support links between multiple databases on independent computers, and to allow simultaneous access by many users from any computer on the Internet. He also specified that the system should eventually handle other media besides text, such as graphics, speech, and video. Links could refer to mutable data files, or even fire up programs on their server computer. He also conceived "gateways" that would allow access through the new system to documents organized in other ways (such as traditional computer file systems or the Usenet). Finally, he insisted that the system should be decentralized, without any central control or coordination over the creation of links.[4][14][10][11]

Berners-Lee submitted a proposal to CERN in May 1989, without giving the system a name.[4] He got a working system implemented by the end of 1990, including a browser called WorldWideWeb (which became the name of the project and of the network) and an HTTP server running at CERN. As part of that development he defined the first version of the HTTP protocol, the basic URL syntax, and implicitly made HTML the primary document format.[15] The technology was released outside CERN to other research institutions starting in January 1991, and then to the whole Internet on 23 August 1991. The Web was a success at CERN, and began to spread to other scientific and academic institutions. Within the next two years, there were 50 websites created.[16][17]

CERN made the Web protocol and code available royalty free in 1993, enabling its widespread use.[18][19] After the NCSA released the Mosaic web browser later that year, the Web's popularity grew rapidly as thousands of websites sprang up in less than a year.[20][21] Mosaic was a graphical browser that could display inline images and submit forms that were processed by the HTTPd server.[22][23] Marc Andreessen and Jim Clark founded Netscape the following year and released the Navigator browser, which introduced Java and JavaScript to the Web. It quickly became the dominant browser. Netscape became a public company in 1995 which triggered a frenzy for the Web and started the dot-com bubble.[24] Microsoft responded by developing its own browser, Internet Explorer, starting the browser wars. By bundling it with Windows, it became the dominant browser for 14 years.[25]

Berners-Lee founded the World Wide Web Consortium (W3C) which created XML in 1996 and recommended replacing HTML with stricter XHTML.[26] In the meantime, developers began exploiting an IE feature called XMLHttpRequest to make Ajax applications and launched the Web 2.0 revolution. Mozilla, Opera, and Apple rejected XHTML and created the WHATWG which developed HTML5.[27] In 2009, the W3C conceded and abandoned XHTML.[28] In 2019, it ceded control of the HTML specification to the WHATWG.[29]

The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet.[30][31][32][9]

Nomenclature


Tim Berners-Lee states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens.[33] Nonetheless, it is often called simply the Web, and also often the web; see Capitalization of Internet for details. In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "10,000-dimensional net", a translation that reflects the design concept and proliferation of the World Wide Web.

Use of the www prefix has been declining, especially when web applications sought to brand their domain names and make them easily pronounceable. As the mobile Web grew in popularity,[citation needed] services like Gmail.com, Outlook.com, Myspace.com, Facebook.com and Twitter.com are most often mentioned without adding "www." (or, indeed, ".com") to the domain.[34]

In English, www is usually read as double-u double-u double-u.[35] Some users pronounce it dub-dub-dub, particularly in New Zealand.[36] Stephen Fry, in his "Podgrams" series of podcasts, pronounces it wuh wuh wuh.[37] The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for".[38]

Function

The World Wide Web functions as an application layer protocol that is run "on top of" (figuratively) the Internet, helping to make it more functional. The advent of the Mosaic web browser helped to make the web much more usable, to include the display of images and moving images (GIFs).

The terms Internet and World Wide Web are often used without much distinction. However, the two terms do not mean the same thing. The Internet is a global system of computer networks interconnected through telecommunications and optical networking. In contrast, the World Wide Web is a global collection of documents and other resources, linked by hyperlinks and URIs. Web resources are accessed using HTTP or HTTPS, which are application-level Internet protocols that use the Internet transport protocols.[2]

Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as 'browsing,' 'web surfing' (after channel surfing), or 'navigating the Web'. Early studies of this new behaviour investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation.[39]

The following example demonstrates the functioning of a web browser when accessing a page at the URL http://example.org/home.html. The browser resolves the server name of the URL (example.org) into an Internet Protocol address using the globally distributed Domain Name System (DNS). This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. HTTP normally uses port number 80 and for HTTPS it normally uses port number 443. The content of the HTTP request can be as simple as two lines of text:

GET /home.html HTTP/1.1
Host: example.org

The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the web server can fulfil the request it sends an HTTP response back to the browser indicating success:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

followed by the content of the requested page. Hypertext Markup Language (HTML) for a basic web page might look like this:

<html>
  <head>
    <title>Example.org – The World Wide Web</title>
  </head>
  <body>
    <p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
  </body>
</html>

The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and such) that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
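
For instance, a slightly extended version of the page above might reference a style sheet, a script, and an image; each of these (invented) URLs triggers its own additional HTTP request:

<html>
  <head>
    <title>Example.org – with additional resources</title>
    <link rel="stylesheet" href="/styles/site.css">
    <script src="/scripts/site.js"></script>
  </head>
  <body>
    <p>The image below is fetched with its own HTTP request:</p>
    <img src="/images/logo.png" alt="Example.org logo">
  </body>
</html>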

HTML


Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web.[40]

Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.

HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img /> and <input /> directly introduce content into the page. Other tags such as <p> surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.

HTML can embed programs written in a scripting language such as JavaScript, which affects the behaviour and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium (W3C), maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML since 1997.[41]

Linking


Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this: <a href="http://example.org/home.html">Example.org Homepage</a>.

Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks

Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.[42]

The hyperlink structure of the web is described by the webgraph: the nodes of the web graph correspond to the web pages (or URLs), and the directed edges between them correspond to the hyperlinks. Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called "dead" links. The ephemeral nature of the Web has prompted many efforts to archive websites. The Internet Archive, active since 1996, is the best known of such efforts.

WWW prefix


Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a Usenet news server. These hostnames appear as Domain Name System (DNS) or subdomain names, as in www.example.com. The use of www is not required by any technical or policy standard and many websites do not use it; the first web server was nxoc01.cern.ch.[43] According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as a subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however the DNS records were never switched, and the practice of prepending www to an institution's website domain name was subsequently copied.[44] Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since only a subdomain, not a bare domain root, can currently be used in a CNAME record, the same result cannot be achieved by using the bare domain root alone.[45]

When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering "microsoft" may be transformed to http://www.microsoft.com/ and "openoffice" to http://www.openoffice.org. This feature started appearing in early versions of Firefox, when it still had the working title 'Firebird' in early 2003, from an earlier practice in browsers such as Lynx.[46] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.[47]

Scheme specifiers


The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted.

Pages

A screenshot of the home page of Wikimedia Commons

A web page (also written as webpage) is a document that is suitable for the World Wide Web and web browsers. A web browser displays a web page on a monitor or mobile device.

The term web page usually refers to what is visible, but may also refer to the contents of the computer file itself, which is usually a text file containing hypertext written in HTML or a comparable markup language. Typical web pages provide hypertext for browsing to other web pages via hyperlinks, often referred to as links. Web browsers will frequently have to access multiple web resource elements, such as reading style sheets, scripts, and images, while presenting each web page.

On a network, a web browser can retrieve a web page from a remote web server. The web server may restrict access to a private network such as a corporate intranet. The web browser uses the Hypertext Transfer Protocol (HTTP) to make such requests to the web server.

A static web page is delivered exactly as stored, as web content in the web server's file system. In contrast, a dynamic web page is generated by a web application, usually driven by server-side software. Dynamic web pages are used when each user may require completely different information, for example, bank websites, web email etc.

Static page


A static web page (sometimes called a flat page/stationary page) is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages which are generated by a web application.

Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.

Dynamic pages

Dynamic web page: example of server-side scripting (PHP and MySQL)

A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing.

A client-side dynamic web page processes the web page using JavaScript running in the browser. JavaScript programs can interact with the document via the Document Object Model (DOM) to query page state and alter it; the same client-side techniques can then dynamically update or change the DOM in response to user input or newly received data.

A dynamic web page is then reloaded by the user or by a computer program to change some variable content. The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using Ajax technologies will neither create a page to go back to nor truncate the web browsing history forward of the displayed page. Using Ajax technologies the end user gets one dynamic page managed as a single page in the web browser while the actual web content rendered on that page can vary. The Ajax engine sits in the browser and requests from an application server only the fragments of content it needs, using them to update parts of the page's DOM.

Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create web pages that are not static web pages, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used.[citation needed] Client-side scripting, server-side scripting, or a combination of these make for the dynamic web experience in a browser.

JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages.[48] The standardised version is ECMAScript.[48] To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available.[49]
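
A minimal sketch of that pattern, with an invented URL and element ID: the script requests a small piece of data from the server and updates the current page's DOM rather than loading a new page.

<p id="status">Loading…</p>
<script>
  // Ask the server for updated information without leaving the page
  fetch('/api/status')
    .then(function (response) { return response.text(); })
    .then(function (text) {
      // Modify the current page rather than navigating to a new one
      document.getElementById('status').textContent = text;
    });
</script>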

Website

The usap.gov website

A website[50] is a collection of related web resources including web pages, multimedia content, typically identified with a common domain name, and published on at least one web server. Notable examples are wikipedia.org, google.com, and amazon.com.

A website may be accessible via a public Internet Protocol (IP) network, such as the Internet, or a private local area network (LAN), by referencing a uniform resource locator (URL) that identifies the site.

Websites can have many functions and can be used in various fashions; a website can be a personal website, a corporate website for a company, a government website, an organization website, etc. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education. All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet.

Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.

Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content. Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time price quotations for different types of markets, as well as sites providing various other services. End users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs.

Browser


A web browser (commonly referred to as a browser) is a software user agent for accessing information on the World Wide Web. To connect to a website's server and display its pages, a user needs to have a web browser program. This is the program that the user runs to download, format, and display a web page on the user's computer.

In addition to allowing users to find, display, and move between web pages, a web browser will usually have features like keeping bookmarks, recording history, managing cookies (see below), and home pages and may have facilities for recording passwords for logging into websites.

The most popular browsers are Chrome, Safari, Edge, Samsung Internet and Firefox.[51]

Server

The inside and front of a Dell PowerEdge web server, a computer designed for rack mounting

A Web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols.

The primary function of a web server is to store, process and deliver web pages to clients.[52] The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content.

Multiple web servers may be used for a high traffic website; here, Dell servers are installed together to be used for the Wikimedia Foundation.

A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the webserver is implemented.

While the primary function is to serve content, full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.

Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP (Hypertext Preprocessor), or other scripting languages. This means that the behaviour of the webserver can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically ("on-the-fly") as opposed to returning static documents. The former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached but cannot deliver dynamic content.

Web servers can also frequently be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring or administering the device in question. This usually means that no additional software has to be installed on the client computer since only a web browser is required (which now is included with most operating systems).

Optical Networking


Optical networking is a sophisticated infrastructure that utilizes optical fiber to transmit data over long distances, connecting countries, cities, and even private residences. The technology uses optical microsystems like tunable lasers, filters, attenuators, switches, and wavelength-selective switches to manage and operate these networks.[53][54]

The large quantity of optical fiber installed throughout the world at the end of the twentieth century set the foundation of the Internet as it’s used today. The information highway relies heavily on optical networking, a method of sending messages encoded in light to relay information in various telecommunication networks.[55]

The Advanced Research Projects Agency Network (ARPANET) was one of the first iterations of the Internet, created in collaboration with universities and researchers in 1969.[56][57][58][59] However, access to the ARPANET was limited to researchers, and in 1985, the National Science Foundation founded the National Science Foundation Network (NSFNET), a program that provided supercomputer access to researchers.[59]

Limited public access to the Internet led to pressure from consumers and corporations to privatize the network. In 1993, the US passed the National Information Infrastructure Act, which dictated that the National Science Foundation must hand over control of the optical capabilities to commercial operators.[60][61]

The privatization of the Internet and the release of the World Wide Web to the public in 1993 led to an increased demand for Internet capabilities. This spurred developers to seek solutions to reduce the time and cost of laying new fiber and increase the amount of information that can be sent on a single fiber, in order to meet the growing needs of the public.[62][63][64][65]

In 1994, Pirelli S.p.A.'s optical components division introduced a wavelength-division multiplexing (WDM) system to meet growing demand for increased data transmission. This four-channel WDM technology allowed more information to be sent simultaneously over a single optical fiber, effectively boosting network capacity.[66][67]

Pirelli wasn't the only company that developed a WDM system; another company, the Ciena Corporation (Ciena), created its own technology to transmit data more efficiently. David Huber, an optical networking engineer and entrepreneur Kevin Kimberlin founded Ciena in 1992.[68][69][70] Drawing on laser technology from Gordon Gould and William Culver of Optelecom, Inc., the company focused on utilizing optical amplifiers to transmit data via light.[71][72][73] Under chief executive officer Pat Nettles, Ciena developed a dual-stage optical amplifier for dense wavelength-division multiplexing (DWDM), patented in 1997 and deployed on the Sprint network in 1996.[74][75][76][77][78]

Cookies

An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers.
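
In HTTP terms this exchange happens through headers, in the same style as the request and response examples shown earlier; the cookie name and value below are placeholders for illustration:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Set-Cookie: session_id=abc123; Path=/; HttpOnly

On later requests to the same site, the browser automatically returns the stored value:

GET /cart.html HTTP/1.1
Host: example.org
Cookie: session_id=abc123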

Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information or require the user to authenticate themselves by logging in. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).[79]

Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories – a potential privacy concern that prompted European[80] and U.S. lawmakers to take action in 2011.[81][82] European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device.

Google Project Zero researcher Jann Horn describes ways cookies can be read by intermediaries, like Wi-Fi hotspot providers. In such circumstances, he recommends using the browser in private browsing mode (widely known as Incognito mode in Google Chrome).[83]

Search engine

The results of a search for the term "lunar eclipse" in a web-based image search engine

A web search engine or Internet search engine is a software system that is designed to carry out web search (Internet search), which means to search the World Wide Web in a systematic way for particular information specified in a web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web.

In 1990, Archie, the world’s first search engine, was released. The technology was originally an index of File Transfer Protocol (FTP) sites, which was a method for moving files between a client and a server network.[84][85] This early search tool was superseded by more advanced engines like Yahoo! in 1995 and Google in 1998.[86][87]

Deep web

Diagram: the deep web versus the surface web

The deep web,[88] invisible web,[89] or hidden web[90] are parts of the World Wide Web whose contents are not indexed by standard web search engines. The opposite term to the deep web is the surface web, which is accessible to anyone using the Internet.[91] Computer scientist Michael K. Bergman is credited with coining the term deep web in 2001 as a search indexing term.[92]

The content of the deep web is hidden behind HTTP forms[93][94] and includes many very common uses such as web mail, online banking, and services that users must pay for and which are protected by a paywall, such as video on demand and some online magazines and newspapers.

The content of the deep web can be located and accessed by a direct URL or IP address and may require a password or other security access past the public website page.

Caching


A web cache is a server computer located either on the public Internet or within an enterprise that stores recently accessed web pages to improve response time for users when the same content is requested within a certain time after the original request. Most web browsers also implement a browser cache by writing recently obtained data to a local data storage device. HTTP requests by a browser may ask only for data that has changed since the last access. Web pages and resources may contain expiration information to control caching to secure sensitive data, such as in online banking, or to facilitate frequently updated sites, such as news media. Even sites with highly dynamic content may permit basic resources to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. Enterprise firewalls often cache Web resources requested by one user for the benefit of many users. Some search engines store cached content of frequently accessed websites.
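
Expiration and revalidation are signalled through HTTP headers, shown here as a hedged sketch in the same style as the earlier HTTP examples (the ETag value and lifetime are invented):

HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=86400
ETag: "5d8c72a5"

If the browser needs the same style sheet again after that day-long lifetime, it can send a conditional request, and an unchanged resource needs no retransmission:

GET /styles/site.css HTTP/1.1
Host: example.org
If-None-Match: "5d8c72a5"

HTTP/1.1 304 Not Modified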

Security


For criminals, the Web has become a venue to spread malware and engage in a range of cybercrime, including (but not limited to) identity theft, fraud, espionage, and intelligence gathering.[95] Web-based vulnerabilities now outnumber traditional computer security concerns,[96][97] and as measured by Google, about one in ten web pages may contain malicious code.[98] Most web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia.[99] The most common of all malware threats are SQL injection attacks against websites.[100] Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript[101] and were exacerbated to some degree by Web 2.0 and Ajax web design that favours the use of scripts.[102] In one 2007 estimate, 70% of all websites were open to XSS attacks on their users.[103] Phishing is another common threat to the Web. In February 2013, RSA (the security division of EMC) estimated the global losses from phishing at $1.5 billion in 2012.[104] Two of the well-known phishing methods are Covert Redirect and Open Redirect.

Proposed solutions vary. Large security companies like McAfee already design governance and compliance suites to meet post-9/11 regulations,[105] and some, like Finjan Holdings, have recommended active real-time inspection of programming code and all content regardless of its source.[95] Some have argued that enterprises should see Web security as a business opportunity rather than a cost centre,[106] while others call for "ubiquitous, always-on digital rights management" enforced in the infrastructure to replace the hundreds of companies that secure data and networks.[107] Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.[108]

Privacy


Every time a client requests a web page, the server can identify the request's IP address. Web servers usually log IP addresses in a log file. Also, unless set not to do so, most web browsers record requested web pages in a viewable history feature, and usually cache much of the content locally. Unless the server-browser communication uses HTTPS encryption, web requests and responses travel in plain text across the Internet and can be viewed, recorded, and cached by intermediate systems. Another way to hide personally identifiable information is by using a virtual private network. A VPN encrypts traffic between the client and VPN server, and masks the original IP address, lowering the chance of user identification.

When a web page asks for, and the user supplies, personally identifiable information (such as their real name, address, or e-mail address), web-based entities can associate current web traffic with that individual. If the website uses HTTP cookies, username and password authentication, or other tracking techniques, it can relate other web visits, before and after, to the identifiable information provided. In this way, a web-based organization can develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping interests, their profession, and other aspects of their demographic profile. These profiles are of potential interest to marketers, advertisers, and others. Depending on the website's terms and conditions and the local laws that apply, information from these profiles may be sold, shared, or passed to other organizations without the user being informed. For many ordinary people, this means little more than some unexpected emails in their inbox or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counterterrorism, and espionage agencies can also identify, target, and track individuals based on their interests or proclivities on the Web.

Social networking sites usually try to get users to use their real names, interests, and locations, rather than pseudonyms, as their executives believe that this makes the social networking experience more engaging for users. On the other hand, uploaded photographs or unguarded statements can be traced to an individual, who may regret the exposure. Employers, schools, parents, and other relatives may be influenced by aspects of social networking profiles, such as text posts or digital photos, that the posting individual did not intend for these audiences. Online bullies may make use of personal information to harass or stalk users. Modern social networking websites allow fine-grained control of the privacy settings for each posting, but these can be complex and not easy to find or use, especially for beginners.[109] Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an online profile. With modern and potential facial recognition technology, it may then be possible to relate that face to other, previously anonymous, images, events, and scenarios that have been imaged elsewhere. Due to image caching, mirroring, and copying, it is difficult to remove an image from the World Wide Web.

Standards

Web standards include many interdependent standards and specifications, some of which govern aspects of the Internet, not just the World Wide Web. Even when not web-focused, such standards directly or indirectly affect the development and administration of websites and web services. Considerations include the interoperability, accessibility, and usability of web pages and websites.

Web standards, in the broader sense, consist of the following:

  • Recommendations published by the World Wide Web Consortium (W3C)[110]
  • Request for Comments (RFC) documents published by the Internet Engineering Task Force (IETF)[111]
  • Standards published by the International Organization for Standardization (ISO)[112]
  • Standards published by Ecma International (formerly ECMA)[113]
  • The Unicode Standard and Unicode Technical Reports (UTRs) published by the Unicode Consortium[114]
  • Name and number registries maintained by the Internet Assigned Numbers Authority (IANA)[115]

Web standards are not fixed sets of rules but are constantly evolving sets of finalized technical specifications of web technologies.[116] Web standards are developed by standards organizations (groups of interested and often competing parties chartered with the task of standardization), not developed and declared to be a standard by a single individual or company. It is crucial to distinguish specifications that are still under development from those that have already reached final development status (in the case of W3C specifications, the highest maturity level).

Accessibility

There are methods for accessing the Web in alternative mediums and formats to facilitate use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, neurological, or some combination. Accessibility features also help people with temporary disabilities, like a broken arm, and ageing users as their abilities change.[117] The Web is used for receiving information as well as for providing information and interacting with society. The World Wide Web Consortium claims that it is essential that the Web be accessible, so it can provide equal access and equal opportunity to people with disabilities.[118] Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."[117] Many countries regulate web accessibility as a requirement for websites.[119] International co-operation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.[117][120]
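
One of the simplest of those guidelines is that images should carry text alternatives so that screen readers can describe them. Below is a minimal sketch of an authoring check for that rule, assuming Python's standard html.parser module; the sample markup and class name are hypothetical:

    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        """Flag <img> tags that lack an alt attribute."""
        def handle_starttag(self, tag, attrs):
            attributes = dict(attrs)
            if tag == "img" and "alt" not in attributes:
                print("Missing alt text:", attributes.get("src", "<no src>"))

    checker = AltTextChecker()
    checker.feed('<p><img src="logo.png" alt="Company logo">'
                 '<img src="chart.png"></p>')
    # prints: Missing alt text: chart.png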

Internationalisation

A global map of the Web Index for countries in 2014

The W3C Internationalisation Activity assures that web technology works in all languages, scripts, and cultures.[121] Beginning in 2004 or 2005, Unicode gained ground, and in December 2007 it surpassed both ASCII and Western European as the Web's most frequently used character encoding.[122] Originally, RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII.

RFC 3987 allows more characters—any character in the Universal Character Set—and now a resource can be identified by IRI in any language.[123]
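
In practice, an IRI containing characters outside US-ASCII is mapped to a plain URI by encoding those characters as UTF-8 and percent-escaping the resulting bytes. A minimal sketch of that mapping, assuming Python's standard urllib.parse module; the example path is hypothetical:

    from urllib.parse import quote, unquote

    iri_path = "/wiki/Žurnalistika"     # path component with a non-ASCII character

    # Map the IRI path to a URI path: UTF-8 encode, then percent-escape
    # everything outside the allowed US-ASCII subset.
    uri_path = quote(iri_path, safe="/")
    print(uri_path)                     # /wiki/%C5%BDurnalistika

    # The reverse mapping recovers the original characters.
    print(unquote(uri_path))            # /wiki/Žurnalistika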

References

  1. ^ Wright, Edmund, ed. (2006). The Desk Encyclopedia of World History. New York: Oxford University Press. p. 312. ISBN 978-0-7394-7809-7.
  2. ^ a b c "What is the difference between the Web and the Internet?". W3C Help and FAQ. W3C. 2009. Archived from the original on 9 July 2015. Retrieved 16 July 2015.
  3. ^ "World Wide Web (WWW) launches in the public domain | April 30, 1993". HISTORY. Retrieved 21 January 2025.
  4. ^ a b c Berners-Lee, Tim. "Information Management: A Proposal". w3.org. The World Wide Web Consortium. Archived from the original on 1 April 2010. Retrieved 12 February 2022.
  5. ^ "The World's First Web Site". HISTORY. 30 August 2018. Archived from the original on 19 August 2023. Retrieved 19 August 2023.
  6. ^ Bleigh, Michael (16 May 2014). "The Once And Future Web Platform". TechCrunch. Archived from the original on 5 December 2021. Retrieved 9 March 2022.
  7. ^ "World Wide Web Timeline". Pew Research Center. 11 March 2014. Archived from the original on 29 July 2015. Retrieved 1 August 2015.
  8. ^ Dewey, Caitlin (12 March 2014). "36 Ways The Web Has Changed Us". The Washington Post. Archived from the original on 9 September 2015. Retrieved 1 August 2015.
  9. ^ a b "Internet Live Stats". internetlivestats.com. Archived from the original on 2 July 2015. Retrieved 1 August 2015.
  10. ^ a b Quittner, Joshua (29 March 1999). "Network Designer Tim Berners-Lee". Time Magazine. Archived from the original on 15 August 2007. Retrieved 17 May 2010. He wove the World Wide Web and created a mass medium for the 21st century. The World Wide Web is Berners-Lee's alone. He designed it. He loosed it on the world. And he more than anyone else has fought to keep it open, non-proprietary and free.[page needed]
  11. ^ a b McPherson, Stephanie Sammartino (2009). Tim Berners-Lee: Inventor of the World Wide Web. Twenty-First Century Books. ISBN 978-0-8225-7273-2.
  12. ^ Rutter, Dorian (2005). From Diversity to Convergence: British Computer Networks and the Internet, 1970-1995 (PDF) (Computer Science thesis). The University of Warwick. Archived (PDF) from the original on 10 October 2022. Retrieved 27 December 2022.
  13. ^ Tim Berners-Lee (1999). Weaving the Web. Internet Archive. HarperSanFrancisco. pp. 5–6. ISBN 978-0-06-251586-5.
  14. ^ Berners-Lee, T.; Cailliau, R.; Groff, J.-F.; Pollermann, B. (1992). "World-Wide Web: The Information Universe". Electron. Netw. Res. Appl. Policy. 2: 52–58. doi:10.1108/eb047254. ISSN 1066-2243. Archived from the original on 27 December 2022. Retrieved 27 December 2022.
  15. ^ W3 (1991) Re: Qualifiers on Hypertext links Archived 7 December 2021 at the Wayback Machine
  16. ^ Hopgood, Bob. "History of the Web". w3.org. The World Wide Web Consortium. Archived from the original on 21 March 2022. Retrieved 12 February 2022.
  17. ^ "A short history of the Web". CERN. Archived from the original on 17 April 2022. Retrieved 15 April 2022.
  18. ^ "Software release of WWW into public domain". CERN Document Server. CERN. 30 January 1993. Archived from the original on 17 February 2022. Retrieved 17 February 2022.
  19. ^ "Ten Years Public Domain for the Original Web Software". Tenyears-www.web.cern.ch. 30 April 2003. Archived from the original on 13 August 2009. Retrieved 27 July 2009.
  20. ^ Calore, Michael (22 April 2010). "April 22, 1993: Mosaic Browser Lights Up Web With Color, Creativity". Wired. Archived from the original on 24 April 2018. Retrieved 12 February 2022.
  21. ^ Couldry, Nick (2012). Media, Society, World: Social Theory and Digital Media Practice. London: Polity Press. p. 2. ISBN 9780745639208. Archived from the original on 27 February 2024. Retrieved 11 December 2020.
  22. ^ Hoffman, Jay (21 April 1993). "The Origin of the IMG Tag". The History of the Web. Archived from the original on 13 February 2022. Retrieved 13 February 2022.
  23. ^ Clarke, Roger. "The Birth of Web Commerce". Roger Clarke's Web-Site. XAMAX. Archived from the original on 15 February 2022. Retrieved 15 February 2022.
  24. ^ McCullough, Brian. "20 YEARS ON: WHY NETSCAPE'S IPO WAS THE "BIG BANG" OF THE INTERNET ERA". www.internethistorypodcast.com. INTERNET HISTORY PODCAST. Archived from the original on 12 February 2022. Retrieved 12 February 2022.
  25. ^ Calore, Michael (28 September 2009). "Sept. 28, 1998: Internet Explorer Leaves Netscape in Its Wake". Wired. Archived from the original on 30 November 2021. Retrieved 14 February 2022.
  26. ^ Daly, Janet (26 January 2000). "World Wide Web Consortium Issues XHTML 1.0 as a Recommendation". W3C. Archived from the original on 20 June 2021. Retrieved 8 March 2022.
  27. ^ Hickson, Ian. "WHAT open mailing list announcement". whatwg.org. WHATWG. Archived from the original on 8 March 2022. Retrieved 16 February 2022.
  28. ^ Shankland, Stephen (9 July 2009). "An epitaph for the Web standard, XHTML 2". CNet. Archived from the original on 16 February 2022. Retrieved 17 February 2022.
  29. ^ "Memorandum of Understanding Between W3C and WHATWG". W3C. Archived from the original on 29 May 2019. Retrieved 16 February 2022.
  30. ^ In, Lee (30 June 2012). Electronic Commerce Management for Business Activities and Global Enterprises: Competitive Advantages: Competitive Advantages. IGI Global. ISBN 978-1-4666-1801-5. Archived from the original on 21 April 2024. Retrieved 27 September 2020.
  31. ^ Misiroglu, Gina (26 March 2015). American Countercultures: An Encyclopedia of Nonconformists, Alternative Lifestyles, and Radical Ideas in U.S. History: An Encyclopedia of Nonconformists, Alternative Lifestyles, and Radical Ideas in U.S. History. Routledge. ISBN 978-1-317-47729-7. Archived from the original on 21 April 2024. Retrieved 27 September 2020.
  32. ^ "World Wide Web Timeline". Pew Research Center. 11 March 2014. Archived from the original on 29 July 2015. Retrieved 1 August 2015.
  33. ^ "Frequently asked questions - Spelling of WWW". W3C. Archived from the original on 2 August 2009. Retrieved 27 July 2009.
  34. ^ Castelluccio, Michael (1 October 2010). "It's not your grandfather's Internet". Strategic Finance. Institute of Management Accountants. Archived from the original on 5 March 2016. Retrieved 7 February 2016 – via The Free Library.
  35. ^ "Audible pronunciation of 'WWW'". Oxford University Press. Archived from the original on 25 May 2014. Retrieved 25 May 2014.
  36. ^ Harvey, Charlie (18 August 2015). "How we pronounce WWW in English: a detailed but unscientific survey". charlieharvey.org.uk. Archived from the original on 19 November 2022. Retrieved 19 May 2022.
  37. ^ "Stephen Fry's pronunciation of 'WWW'". Podcasts.com. Archived from the original on 4 April 2017.
  38. ^ Simonite, Tom (22 July 2008). "Help us find a better way to pronounce www". newscientist.com. New Scientist, Technology. Archived from the original on 13 March 2016. Retrieved 7 February 2016.
  39. ^ Muylle, Steve; Moenaert, Rudy; Despont, Marc (1999). "A grounded theory of World Wide Web search behaviour". Journal of Marketing Communications. 5 (3): 143. doi:10.1080/135272699345644.
  40. ^ Flanagan, David. JavaScript – The definitive guide (6 ed.). p. 1. JavaScript is part of the triad of technologies that all Web developers must learn: HTML to specify the content of web pages, CSS to specify the presentation of web pages, and JavaScript to specify the behaviour of web pages.
  41. ^ "HTML 4.0 Specification – W3C Recommendation – Conformance: requirements and recommendations". World Wide Web Consortium. 18 December 1997. Archived from the original on 5 July 2015. Retrieved 6 July 2015.
  42. ^ Berners-Lee, Tim; Cailliau, Robert (12 November 1990). "WorldWideWeb: Proposal for a HyperText Project". Archived from the original on 2 May 2015. Retrieved 12 May 2015.
  43. ^ Berners-Lee, Tim. "Frequently asked questions by the Press". W3C. Archived from the original on 2 August 2009. Retrieved 27 July 2009.
  44. ^ Palazzi, P (2011). "The Early Days of the WWW at CERN". Archived from the original on 23 July 2012.
  45. ^ Fraser, Dominic (13 May 2018). "Why a domain's root can't be a CNAME – and other tidbits about the DNS". FreeCodeCamp. Archived from the original on 21 April 2024. Retrieved 12 March 2019.
  46. ^ "automatically adding www.___.com". mozillaZine. 16 May 2003. Archived from the original on 27 June 2009. Retrieved 27 May 2009.
  47. ^ Masnick, Mike (7 July 2008). "Microsoft Patents Adding 'www.' And '.com' To Text". Techdirt. Archived from the original on 27 June 2009. Retrieved 27 May 2009.
  48. ^ a b Hamilton, Naomi (31 July 2008). "The A-Z of Programming Languages: JavaScript". Computerworld. IDG. Archived from the original on 24 May 2009. Retrieved 12 May 2009.
  49. ^ Buntin, Seth (23 September 2008). "jQuery Polling plugin". Archived from the original on 13 August 2009. Retrieved 22 August 2009.
  50. ^ "website". TheFreeDictionary.com. Archived from the original on 7 May 2018. Retrieved 2 July 2011.
  51. ^ www.similarweb.com. https://www.similarweb.com/browsers/. Retrieved 15 February 2025.
  52. ^ Patrick, Killelea (2002). Web performance tuning (2nd ed.). Beijing: O'Reilly. p. 264. ISBN 978-0596001728. OCLC 49502686.
  53. ^ Liu, Xiang (20 December 2019). "Evolution of Fiber-Optic Transmission and Networking toward the 5G Era". iScience. 22: 489–506. Bibcode:2019iSci...22..489L. doi:10.1016/j.isci.2019.11.026. ISSN 2589-0042. PMC 6920305. PMID 31838439.
  54. ^ Marom, Dan M. (1 January 2008), Gianchandani, Yogesh B.; Tabata, Osamu; Zappe, Hans (eds.), "3.07 - Optical Communications", Comprehensive Microsystems, Oxford: Elsevier, pp. 219–265, doi:10.1016/b978-044452190-3.00035-5, ISBN 978-0-444-52190-3, retrieved 17 January 2025
  55. ^ Chadha, Devi (2019). Optical WDM networks: from static to elastic networks. Hoboken, NJ: Wiley-IEEE Press. ISBN 978-1-119-39326-9.
  56. ^ "The Computer History Museum, SRI International, and BBN Celebrate the 40th Anniversary of First ARPANET Transmission, Precursor to Today's Internet | SRI International". 29 March 2019. Archived from the original on 29 March 2019. Retrieved 21 January 2025.
  57. ^ Markoff, John (24 January 1993). "Building the Electronic Superhighway". The New York Times. ISSN 0362-4331. Retrieved 21 January 2025.
  58. ^ Abbate, Janet (2000). Inventing the Internet. Inside technology (3rd printing ed.). Cambridge, Mass.: MIT Press. ISBN 978-0-262-51115-5.
  59. ^ a b www.merit.edu. http://web.archive.org/web/20241106150721/https://www.merit.edu/wp-content/uploads/2019/06/NSFNET_final-1.pdf. Archived from the original (PDF) on 6 November 2024. Retrieved 21 January 2025.
  60. ^ Rep. Boucher, Rick [D-VA-9] (14 September 1993). "H.R.1757 - 103rd Congress (1993-1994): National Information Infrastructure Act of 1993". www.congress.gov. Retrieved 23 January 2025.
  61. ^ "NSF Shapes the Internet's Evolution | NSF - National Science Foundation". new.nsf.gov. 25 July 2003. Retrieved 23 January 2025.
  62. ^ Radu, Roxana (7 March 2019), Radu, Roxana (ed.), "Privatization and Globalization of the Internet", Negotiating Internet Governance, Oxford University Press, pp. 75–112, doi:10.1093/oso/9780198833079.003.0004, ISBN 978-0-19-883307-9, retrieved 23 January 2025
  63. ^ "Birth of the Commercial Internet - NSF Impacts | NSF - National Science Foundation". new.nsf.gov. Retrieved 23 January 2025.
  64. ^ Markoff, John (3 March 1997). "Fiber-Optic Technology Draws Record Stock Value". The New York Times. ISSN 0362-4331. Retrieved 23 January 2025.
  65. ^ Korzeniowski, Paul (2 June 1997). "Record Growth Spurs Demand for Dense WDM -- Infrastructure Bandwidth Gears up for next Wave". CommunicationsWeek. No. 666: T.40.
  66. ^ Hecht, Jeff (1999). City of light: the story of fiber optics. The Sloan technology series. New York: Oxford University Press. ISBN 978-0-19-510818-7.
  67. ^ "Cisco to Acquire Pirelli DWDM Unit for $2.15 Billion". www.fiberopticsonline.com. Retrieved 31 January 2025.
  68. ^ Hirsch, Stacey (February 2, 2006). "Huber steps down as CEO of Broadwing". The Baltimore Sun.
  69. ^ "Dr. David Huber". History of the Internet. Retrieved 3 February 2025.
  70. ^ "Internet Commercialization History". History of the Internet. Retrieved 3 February 2025.
  71. ^ "May 17, 1993, page 76 - The Baltimore Sun at Baltimore Sun". Newspapers.com. Retrieved 3 February 2025.
  72. ^ Hall, Carla (17 December 1987). "Inventor Beams over Laser Patents: After 30 Years, Gordon Gould Gets Credit He Deserves". Los Angeles Times.
  73. ^ Chang, Kenneth (20 September 2005). "Gordon Gould, 85, Figure in Invention of the Laser, Dies". The New York Times. ISSN 0362-4331. Retrieved 3 February 2025.
  74. ^ Carroll, Jim (12 December 2024). "Patrick Nettles Steps Down as Executive Chair of Ciena". Converge Digest. Retrieved 3 February 2025.
  75. ^ US5696615A, Alexander, Stephen B., "Wavelength division multiplexed optical communication systems employing uniform gain optical amplifiers", issued 1997-12-09 
  76. ^ Hecht, Jeff (2004). City of light: the story of fiber optics. The Sloan technology series (Rev. and expanded ed., 1. paperback [ed.] ed.). Oxford: Oxford Univ. Press. ISBN 978-0-19-510818-7.
  77. ^ "Optica Publishing Group". opg.optica.org. Retrieved 3 February 2025.
  78. ^ "Sprint boots some users off 'Net - ProQuest". www.proquest.com. ProQuest 215944575. Retrieved 3 February 2025.
  79. ^ Vamosi, Robert (14 April 2008). "Gmail cookie stolen via Google Spreadsheets". News.cnet.com. Archived from the original on 9 December 2013. Retrieved 19 October 2017.
  80. ^ "What about the "EU Cookie Directive"?". WebCookies.org. 2013. Archived from the original on 11 October 2017. Retrieved 19 October 2017.
  81. ^ "New net rules set to make cookies crumble". BBC. 8 March 2011. Archived from the original on 10 August 2018. Retrieved 18 February 2019.
  82. ^ "Sen. Rockefeller: Get Ready for a Real Do-Not-Track Bill for Online Advertising". Adage.com. 6 May 2011. Archived from the original on 24 August 2011. Retrieved 18 February 2019.
  83. ^ Want to use my wifi? Archived 4 January 2018 at the Wayback Machine, Jann Horn accessed 5 January 2018.
  84. ^ Nguyen, Jennimai (10 September 2020). "Archie, the very first search engine, was released 30 years ago today". Mashable. Retrieved 4 February 2025.
  85. ^ "What is File Transfer Protocol (FTP) meaning". Fortinet. Retrieved 4 February 2025.
  86. ^ "Britannica Money". www.britannica.com. 4 February 2025. Retrieved 4 February 2025.
  87. ^ Clark, Andrew (1 February 2008). "How Jerry's guide to the world wide web became Yahoo". The Guardian. ISSN 0261-3077. Retrieved 4 February 2025.
  88. ^ Hamilton, Nigel (13 May 2024). "The Mechanics of a Deep Net Metasearch Engine". IADIS Digital Library: 1034–1036. ISBN 978-972-98947-0-1.
  89. ^ Devine, Jane; Egger-Sider, Francine (July 2004). "Beyond google: the invisible web in the academic library". The Journal of Academic Librarianship. 30 (4): 265–269. doi:10.1016/j.acalib.2004.04.010.
  90. ^ Raghavan, Sriram; Garcia-Molina, Hector (11–14 September 2001). "Crawling the Hidden Web". 27th International Conference on Very Large Data Bases. Archived from the original on 17 August 2019. Retrieved 18 February 2019.
  91. ^ "Surface Web". Computer Hope. Archived from the original on 5 May 2020. Retrieved 20 June 2018.
  92. ^ Wright, Alex (22 February 2009). "Exploring a 'Deep Web' That Google Can't Grasp". The New York Times. Archived from the original on 1 March 2020. Retrieved 23 February 2009.
  93. ^ Madhavan, J., Ko, D., Kot, Ł., Ganapathy, V., Rasmussen, A., & Halevy, A. (2008). Google's deep web crawl. Proceedings of the VLDB Endowment, 1(2), 1241–52.
  94. ^ Shedden, Sam (8 June 2014). "How Do You Want Me to Do It? Does It Have to Look like an Accident? – an Assassin Selling a Hit on the Net; Revealed Inside the Deep Web". Sunday Mail. Archived from the original on 1 March 2020. Retrieved 5 May 2017.
  95. ^ a b Ben-Itzhak, Yuval (18 April 2008). "Infosecurity 2008 – New defence strategy in battle against e-crime". ComputerWeekly. Reed Business Information. Archived from the original on 4 June 2008. Retrieved 20 April 2008.
  96. ^ Christey, Steve & Martin, Robert A. (22 May 2007). "Vulnerability Type Distributions in CVE (version 1.1)". MITRE Corporation. Archived from the original on 17 March 2013. Retrieved 7 June 2008.
  97. ^ "Symantec Internet Security Threat Report: Trends for July–December 2007 (Executive Summary)" (PDF). Symantec Internet Security Threat Report. XIII. Symantec Corp.: 1–2 April 2008. Archived from the original (PDF) on 25 June 2008. Retrieved 11 May 2008.
  98. ^ "Google searches web's dark side". BBC News. 11 May 2007. Archived from the original on 7 March 2008. Retrieved 26 April 2008.
  99. ^ "Security Threat Report (Q1 2008)" (PDF). Sophos. Archived (PDF) from the original on 31 December 2013. Retrieved 24 April 2008.
  100. ^ "Security threat report" (PDF). Sophos. July 2008. Archived (PDF) from the original on 31 December 2013. Retrieved 24 August 2008.
  101. ^ Jeremiah Grossman; Robert "RSnake" Hansen; Petko "pdp" D. Petkov; Anton Rager; Seth Fogie (2007). Cross Site Scripting Attacks: XSS Exploits and Defense (PDF). Syngress, Elsevier Science & Technology. pp. 68–69, 127. ISBN 978-1-59749-154-9. Archived (PDF) from the original on 15 November 2024. Retrieved 23 January 2025.
  102. ^ O'Reilly, Tim (30 September 2005). "What Is Web 2.0". O'Reilly Media. pp. 4–5. Archived from the original on 28 June 2012. Retrieved 4 June 2008. and AJAX web applications can introduce security vulnerabilities like "client-side security controls, increased attack surfaces, and new possibilities for Cross-Site Scripting (XSS)", in Ritchie, Paul (March 2007). "The security risks of AJAX/web 2.0 applications" (PDF). Infosecurity. Archived from the original (PDF) on 25 June 2008. Retrieved 6 June 2008. which cites Hayre, Jaswinder S. & Kelath, Jayasankar (22 June 2006). "Ajax Security Basics". SecurityFocus. Archived from the original on 15 May 2008. Retrieved 6 June 2008.
  103. ^ Berinato, Scott (1 January 2007). "Software Vulnerability Disclosure: The Chilling Effect". CSO. CXO Media. p. 7. Archived from the original on 18 April 2008. Retrieved 7 June 2008.
  104. ^ "2012 Global Losses From phishing Estimated At $1.5 Bn". FirstPost. 20 February 2013. Archived from the original on 21 December 2014. Retrieved 25 January 2019.
  105. ^ Prince, Brian (9 April 2008). "McAfee Governance, Risk and Compliance Business Unit". eWEEK. Ziff Davis Enterprise Holdings. Archived from the original on 21 April 2024. Retrieved 25 April 2008.
  106. ^ Preston, Rob (12 April 2008). "Down To Business: It's Past Time To Elevate The Infosec Conversation". InformationWeek. United Business Media. Archived from the original on 14 April 2008. Retrieved 25 April 2008.
  107. ^ Claburn, Thomas (6 February 2007). "RSA's Coviello Predicts Security Consolidation". InformationWeek. United Business Media. Archived from the original on 7 February 2009. Retrieved 25 April 2008.
  108. ^ Duffy Marsan, Carolyn (9 April 2008). "How the iPhone is killing the 'Net". Network World. IDG. Archived from the original on 14 April 2008. Retrieved 17 April 2008.
  109. ^ boyd, danah; Hargittai, Eszter (July 2010). "Facebook privacy settings: Who cares?". First Monday. 15 (8). doi:10.5210/fm.v15i8.3086.
  110. ^ "W3C Technical Reports and Publications". W3C. Archived from the original on 15 July 2018. Retrieved 19 January 2009.
  111. ^ "IETF RFC page". IETF. Archived from the original on 2 February 2009. Retrieved 19 January 2009.
  112. ^ "Search for World Wide Web in ISO standards". ISO. Archived from the original on 4 March 2016. Retrieved 19 January 2009.
  113. ^ "Ecma formal publications". Ecma. Archived from the original on 27 December 2017. Retrieved 19 January 2009.
  114. ^ "Unicode Technical Reports". Unicode Consortium. Archived from the original on 2 January 2022. Retrieved 19 January 2009.
  115. ^ "IANA home page". IANA. Archived from the original on 24 February 2011. Retrieved 19 January 2009.
  116. ^ Sikos, Leslie (2011). Web standards – Mastering HTML5, CSS3, and XML. Apress. ISBN 978-1-4302-4041-9. Archived from the original on 2 April 2015. Retrieved 12 March 2019.
  117. ^ a b c "Web Accessibility Initiative (WAI)". World Wide Web Consortium. Archived from the original on 2 April 2009. Retrieved 7 April 2009.
  118. ^ "Developing a Web Accessibility Business Case for Your Organization: Overview". World Wide Web Consortium. Archived from the original on 14 April 2009. Retrieved 7 April 2009.
  119. ^ "Legal and Policy Factors in Developing a Web Accessibility Business Case for Your Organization". World Wide Web Consortium. Archived from the original on 5 April 2009. Retrieved 7 April 2009.
  120. ^ "Web Content Accessibility Guidelines (WCAG) Overview". World Wide Web Consortium. Archived from the original on 1 April 2009. Retrieved 7 April 2009.
  121. ^ "Internationalization (I18n) Activity". World Wide Web Consortium. Archived from the original on 16 April 2009. Retrieved 10 April 2009.
  122. ^ Davis, Mark (5 April 2008). "Moving to Unicode 5.1". Archived from the original on 21 May 2009. Retrieved 10 April 2009.
  123. ^ "World Wide Web Consortium Supports the IETF URI Standard and IRI Proposed Standard" (Press release). World Wide Web Consortium. 26 January 2005. Archived from the original on 7 February 2009. Retrieved 10 April 2009.

Further reading

  • Berners-Lee, Tim; Bray, Tim; Connolly, Dan; Cotton, Paul; Fielding, Roy; Jeckle, Mario; Lilley, Chris; Mendelsohn, Noah; Orchard, David; Walsh, Norman; Williams, Stuart (15 December 2004). "Architecture of the World Wide Web, Volume One". W3C. Version 20041215.
  • Berners-Lee, Tim (August 1996). "The World Wide Web: Past, Present and Future". W3C.
  • Brügger, Niels, ed. Web25: Histories from the first 25 years of the World Wide Web (Peter Lang, 2017).
  • Fielding, R.; Gettys, J.; Mogul, J.; Frystyk, H.; Masinter, L.; Leach, P.; Berners-Lee, T. (June 1999). "Hypertext Transfer Protocol – HTTP/1.1". Request For Comments 2616. Information Sciences Institute.
  • Niels Brügger, ed. Web History (2010) 362 pages; Historical perspective on the World Wide Web, including issues of culture, content, and preservation.
  • Polo, Luciano (2003). "World Wide Web Technology Architecture: A Conceptual Analysis". New Devices.
  • Skau, H.O. (March 1990). "The World Wide Web and Health Information". New Devices.