In web development, "tag soup" is a pejorative for syntactically or structurally incorrect HTML written for a web page. Because web browsers have historically treated structural and syntax errors in HTML leniently, there has been little pressure on web developers to follow published standards; browser implementations therefore need mechanisms to cope with "tag soup", accepting and correcting invalid syntax and structure where possible.
An HTML parser (part of a web browser) that can interpret HTML-like markup even when it contains invalid syntax or structure may be called a tag soup parser. All major web browsers currently include such a parser for interpreting malformed HTML, and most of this error handling is now standardized.
"Tag soup" encompasses many common authoring mistakes, such as malformed HTML tags, improperly nested HTML elements, and unescaped character entities (especially ampersands (&) and less-than signs (<)).
I have used this term in my instruction for years to characterize the jumble of angle brackets acting like tags in HTML in pages that are accepted by browsers. Improper minimization, overlapping constructs ... stuff that looks like SGML markup but the creator didn't know or respect SGML rules for the HTML vocabulary. In effect a soupy collection of text and markup. [...] I've never seen the term defined anywhere.—G. Ken Holman, Re: [xml-dev] What is Tag Soup?, XML development mailing list, 11 Oct 2002.
The W3C Markup Validation Service is a resource for web page authors to avoid creating tag soup.
"Tag soup" is a term used to denigrate various practices in web authoring. Some of these (roughly ordered from most severe to least severe) include:
<p>This is a malformed fragment of <em>HTML.</p></em>
Malformed markup is arguably the most severe problem in web authoring. However, thanks to better education and information, and perhaps with some help from XHTML, malformed markup is becoming less common. When faced with malformed markup, browsers must guess the author's intended meaning: they must infer closing tags where they expect them and then infer opening tags to match other closing tags. The interpretation can vary markedly from one browser to the next.[2]
While many graphical web editors produce well-formed markup, an author writing code manually with a text editor and then testing in only one browser can easily miss such errors. The presentation can therefore vary drastically from one browser to another, as each tries to "correct" the author's intent in different ways and then applies styling to those "corrections".
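As an illustration (a hypothetical repair, not the documented behaviour of any particular browser), one parser might mend the fragment above by closing the emphasis before the paragraph ends:
  <p>This is a malformed fragment of <em>HTML.</em></p>
while another might discard the stray closing tag and leave the emphasis open, so that it unintentionally applies to whatever content follows. Each guess yields a different document tree, and therefore a different rendering.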
Invalid document structure here means only the use of attributes and elements where they do not belong. For example, placing a "cite" attribute on a "cite" element is invalid since the HTML and XHTML DTDs do not ascribe any meaning to that attribute on that element. Similarly, including a "p" element within the content of an "em" element is also invalid. With the move toward separating malformed markup from invalid markup, the problems with invalid markup have increasingly been seen as less severe. Some have begun to advocate looser content models that allow greater flexibility in authoring HTML documents (whether in HTML or XHTML). However, use of invalid markup can blur the author's intended meaning, though not as severely as malformed markup.
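As a sketch of the second case, nesting a block-level "p" element inside an inline "em" element is invalid, whereas the reverse nesting is allowed:
  Invalid: <em>Emphasised text <p>with a paragraph inside the emphasis.</p></em>
  Valid:   <p><em>Emphasised text, with the emphasis inside the paragraph.</em></p>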
Many graphical web editors still produce invalid markup, and many professional web designers and authors pay little attention to issues of validity. Invalid markup remains common on sites throughout the World Wide Web.
In the early age of the web (much of the 1990s), the official HTML specification became increasingly strained by designers' desire for the flexibility to create visually vibrant designs. In response to this pressure, browser makers unilaterally added new proprietary features to HTML that fell outside the standards of the time. This meant there were proprietary elements in HTML that worked in some browsers but not in others.
To some extent, this problem was slowed by the introduction of new W3C standards, such as CSS in the late 1990s, which provided greater flexibility in the presentation and layout of web pages without the need for large numbers of additional HTML elements and attributes.
Moreover, in HTML 4 and XHTML 1, many elements were either superseded by a single semantic construct (such as the object element replacing the applet element and the proprietary embed element) or deprecated for being presentational (such as the "s", "strike" and "u" elements).
Nevertheless, browser developers continued to introduce new elements to HTML when they perceived a need. Some browsers included tabindex attributes on any element. Developers of Apple's WebKit introduced the canvas element, a version of which was subsequently adopted by Mozilla.
In 2004, Apple, Mozilla and Opera founded the WHATWG, with the intent of creating a new version of the HTML specification which all browser behavior would match. This included changing the specification if necessary to match an existing consensus between different browsers.[3]
The canvas[4] and embed[5] elements were subsequently standardised by the WHATWG. Certain elements (including b, i and small) which were previously considered presentational and deprecated were included, but defined in a media-independent rather than visual manner.[6]
Versions of the WHATWG specification were published by the W3C as HTML5.[3]
While some tag soup is due to shortcomings of browsers, and sometimes to a lack of information for web authors, some of its proliferation was due to gaps in the web standards themselves. The W3C has spearheaded several efforts to address these shortcomings. As more browsers support newer revisions of the standards, the pressure on web developers to use non-standard code to solve problems diminishes.
Cascading Style Sheets (CSS) provide a mechanism to specify the presentation of elements in a document without altering its markup structure. Before CSS was commonplace, web developers often resorted to structurally invalid markup to achieve certain presentational goals – for example, including block-level elements within inline elements to obtain a particular effect, or using large numbers of <font> and other display-specific HTML tags. CSS uses style rules to accomplish these tasks while leaving the markup cleaner and simpler.
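For example (the class name and rule here are purely illustrative), a presentational fragment such as
  <font face="Arial" color="red" size="4">Warning: check your markup.</font>
can be rewritten as
  <span class="warning">Warning: check your markup.</span>
with the presentation moved into a style rule, shown here in a style element for brevity:
  <style>.warning { font-family: Arial, sans-serif; color: red; font-size: 1.25em; }</style>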
XHTML is a reformulation of the HTML language based on XML. XHTML was developed to address many of the problems associated with tag soup.
XML allows parsers to separate the process of interpreting the document syntax and its structure. In HTML and SGML, a parser needed to know certain rules about elements during parsing, such as what elements could be contained within other elements and which elements implicitly close the previous element. This is because in HTML and SGML, closing tags and even opening tags were optional on some elements. By requiring all elements to have explicit opening and closing tags, XML parsers can parse the document and produce a document tree without any knowledge of the document type. This allows parsers to be universal and very light-weight, and to be separated from the process of validating or interpreting the document.
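For instance, the following list is valid HTML because the closing </li> tags are implied, but a generic XML parser cannot accept it without knowing that HTML-specific rule; the XHTML form makes every element boundary explicit:
  HTML:  <ul><li>First item<li>Second item</ul>
  XHTML: <ul><li>First item</li><li>Second item</li></ul>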
The XML specification requires that a conforming processor (such as a web browser) stop normal processing of a document if it encounters a fatal well-formedness error. Thus, a browser interpreting a web page as XHTML will refuse to display the page if it encounters such an error. This helps ensure that when authors test XHTML code in a conforming browser, they are immediately informed of malformation problems: perhaps the most severe problem facing web browsers. When code is malformed, the intent of the author is ambiguous; without the strict rules of XML, HTML browsers must use complex algorithms to infer the author's intended meaning in a wide range of cases where invalid syntax is encountered.
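A single unescaped character is enough to trigger this behaviour. When parsed as XHTML, the first line below is rejected with a fatal well-formedness error (the bare ampersand), while the second parses cleanly:
  <p>Fish & chips</p>
  <p>Fish &amp; chips</p>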
XML and XHTML introduce the concept of namespaces. With namespaces, authors or communities of authors can define new elements and attributes with new semantics and intermix them within their XHTML documents. Namespaces ensure that element names from different vocabularies are not conflated: for example, a "table" element could be defined in a new namespace with semantics different from those of the HTML "table" element, and a browser would be able to differentiate between the two. By providing namespaces, XHTML combined with CSS allows authoring communities to extend the semantic vocabulary of documents easily. This accommodates the use of proprietary elements, so long as those elements can be presented to the intended audience through complete style sheet definitions (including aural/speech and tactile styles).
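As a minimal sketch of mixing vocabularies, an XHTML document can embed MathML markup by declaring the MathML namespace, and a namespace-aware browser can tell the two vocabularies apart:
  <html xmlns="http://www.w3.org/1999/xhtml">
    <head><title>Namespaces</title></head>
    <body>
      <p>Inline mathematics:
        <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>x</mi><mo>+</mo><mn>1</mn></math>
      </p>
    </body>
  </html>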
XHTML documents may be served on the web using the internet media type application/xhtml+xml or text/html.[7] Microsoft Internet Explorer versions before 9 do not display XHTML documents served as application/xhtml+xml; IE9 and later versions are compliant. See also the discussion of this issue in the XHTML article.
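For example, a server opting for the XML processing path sends the XHTML media type in the HTTP response header:
  Content-Type: application/xhtml+xml; charset=utf-8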
HTML5 aims to be the most complete solution to the problem of tag soup thus far, while remaining as backwards- and forwards-compatible as possible. In contrast to XHTML, which departs from backwards compatibility and takes the approach that parsers should become less tolerant of badly formed markup, HTML5 acknowledges that badly formed HTML code already exists in large quantities and will probably continue to be used, and takes the view that the specification should be expanded to ensure maximum compatibility with such code.
Thus, the HTML5 specification has altered its definition of HTML syntax both to accommodate common syntax in use today and to describe exactly how "badly formed code" should be treated by the parser. The handling of badly formed code now has a place in the specification itself, reducing the need for future HTML parsers to implement additional, out-of-specification measures for dealing with code they do not recognize.
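For example, the HTML5 parsing algorithm defines exactly how mis-nested formatting elements are repaired, so every conforming parser builds the same tree. Given the input
  <b>bold <i>both</b> italic</i>
the resulting tree serialises along the lines of
  <b>bold <i>both</i></b><i> italic</i>
rather than each browser guessing differently.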
Many software tools exist which can parse and attempt to correct malformed markup, among other functions.
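For instance, a clean-up tool such as HTML Tidy, given the malformed fragment from the beginning of this article, would typically warn about the mismatched tags and emit repaired markup along the lines of:
  <p>This is a malformed fragment of <em>HTML.</em></p>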
Unlike strict XHTML, HTML and its predecessor SGML are designed to be written by humans and already allow a significant degree of flexibility in syntax to reduce boilerplate. These shortcuts do not make a document invalid and are therefore not tag soup. The following apply to both HTML 4 and HTML5,[9] and the examples date back to the first days of HTML.[10] For instance, the <head>...</head> tags can often be omitted completely, and <li>...</li> elements can be written without closing tags. Despite being valid, such omissions still require a parser with knowledge of HTML (as opposed to the more rigid XML). In addition, it is common for tools to "fix" these structures too; for example, HTML Tidy allows omitting optional tags, but defaults to not doing so.[11]
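As a small illustration, the following is a complete, valid HTML5 document even though the html, head and body tags never appear; an HTML parser is expected to infer them:
  <!DOCTYPE html>
  <title>Valid despite the omissions</title>
  <p>The html, head and body elements are implied.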
Original source: https://en.wikipedia.org/wiki/Tag soup.