{{Short description|Method for data management}}
 
'''Search engine indexing''' is the collecting, [[parsing]], and storing of data to facilitate fast and accurate [[information retrieval]]. Index design incorporates interdisciplinary concepts from [[linguistics]], [[cognitive psychology]], mathematics, [[informatics]], and [[computer science]]. An alternate name for the process, in the context of [[search engine]]s designed to find [[web page]]s on the Internet, is ''[[web indexing]]''.
 
Popular search engines focus on the [[Full-text search|full-text]] indexing of online, [[Natural language processing|natural language]] documents.<ref>Clarke, C., Cormack, G.: Dynamic Inverted Indexes for a Distributed Full-Text Retrieval System. TechRep MT-95-01, University of Waterloo, February 1995.</ref> [[Media type]]s such as pictures, video,<ref>{{cite journal |last=Sikos |first=L. F. |date=August 2016 |title=RDF-powered semantic video annotation tools with concept mapping to Linked Data for next-generation video indexing |journal=Multimedia Tools and Applications |doi=10.1007/s11042-016-3705-7 |s2cid=254832794}}</ref> audio,<ref>{{Cite web |title=An Industrial-Strength Audio Search Algorithm |url=http://www.ee.columbia.edu/~dpwe/papers/Wang03-shazam.pdf |archive-url=https://web.archive.org/web/20060512074748/http://www.ee.columbia.edu/~dpwe/papers/Wang03-shazam.pdf |archive-date=2006-05-12}}</ref> and graphics<ref>Charles E. Jacobs, Adam Finkelstein, David H. Salesin. [http://grail.cs.washington.edu/projects/query/mrquery.pdf Fast Multiresolution Image Querying]. Department of Computer Science and Engineering, University of Washington. 1995. Verified Dec 2006</ref> are also searchable.
 
[[Metasearch engine|Meta search engines]] reuse the indices of other services and do not store a local index, whereas cache-based search engines permanently store the index along with the [[text corpus|corpus]]. Unlike full-text indices, partial-text services restrict the depth indexed to reduce index size. Larger services typically perform indexing at a predetermined time interval due to the required time and processing costs, while [[Intelligent agent|agent]]-based search engines index in [[Real time business intelligence|real time]].
 
A simple inverted index can only determine whether a word exists within a particular document, since it stores no information regarding the frequency and position of the word; it is therefore considered to be a [[Boolean data type|Boolean]] index. Such an index determines which documents match a query but does not rank matched documents. In some designs the index includes additional information such as the frequency of each word in each document or the positions of a word in each document.<ref>Grossman, Frieder, Goharian. [http://www.cs.clemson.edu/~juan/CPSC862/Concept-50/IR-Basics-of-Inverted-Index.pdf IR Basics of Inverted Index]. 2002. Verified Aug 2011.</ref> Position information enables the search algorithm to identify word proximity to support searching for phrases; frequency can be used to help in ranking the relevance of documents to the query. Such topics are the central research focus of [[information retrieval]].
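A minimal sketch in Python of the two designs (the documents and identifiers below are invented for illustration, not drawn from any particular engine):

<syntaxhighlight lang="python">
# Toy documents; IDs and text are invented for the illustration.
documents = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "the quick dog jumps",
}

# Boolean index: term -> set of document IDs. It answers only
# "which documents contain this word?" -- no frequency, no position.
boolean_index = {}
for doc_id, text in documents.items():
    for word in text.split():
        boolean_index.setdefault(word, set()).add(doc_id)

# Positional index: term -> {doc_id: [positions]}. Positions enable
# phrase and proximity search; the list lengths give term frequency.
positional_index = {}
for doc_id, text in documents.items():
    for position, word in enumerate(text.split()):
        positional_index.setdefault(word, {}).setdefault(doc_id, []).append(position)

print(boolean_index["quick"])     # {1, 3}
print(positional_index["quick"])  # {1: [1], 3: [1]}
</syntaxhighlight>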
 
The inverted index is a [[sparse matrix]], since not all words are present in each document. To reduce [[Computer storage|computer storage]] requirements, it is stored differently from a two-dimensional [[Array data structure|array]]. The index is similar to the [[document-term matrix|term document matrices]] employed by [[latent semantic analysis]]. The inverted index can be considered a form of a hash table. In some cases the index is a form of a [[binary tree]], which requires additional storage but may reduce the lookup time. In larger indices the architecture is typically a [[distributed hash table]].<ref>Tang, Hunqiang. [[Sandhya Dwarkadas|Dwarkadas, Sandhya]]. "Hybrid Global Local Indexing for Efficient Peer to Peer Information Retrieval". University of Rochester.</ref>
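As a rough illustration of why the sparse, hash-table-backed form is preferred over a dense two-dimensional array (vocabulary and documents are again toy data):

<syntaxhighlight lang="python">
vocabulary = ["the", "quick", "brown", "fox", "lazy", "dog", "jumps"]
documents = ["the quick brown fox", "the lazy dog", "the quick dog jumps"]

# Dense term-document matrix: one cell per (term, document) pair.
# For a realistic vocabulary, almost every cell is zero.
dense = [[int(term in doc.split()) for doc in documents] for term in vocabulary]

# Sparse, hash-table-backed equivalent: store only the postings that
# exist. Fetching a term's postings list is an average O(1) dict access.
postings = {}
for doc_id, doc in enumerate(documents):
    for term in set(doc.split()):
        postings.setdefault(term, []).append(doc_id)

print(dense[vocabulary.index("fox")])  # [1, 0, 0]
print(sorted(postings["the"]))         # [0, 1, 2]
</syntaxhighlight>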
; Language ambiguity: To assist with properly ranking matching documents, many search engines collect additional information about each word, such as its [[language]] or [[lexical category]] ([[part of speech]]). These techniques are language-dependent, as the syntax varies among languages. Documents do not always clearly identify their language or represent it accurately. In tokenizing the document, some search engines attempt to automatically identify its language.
 
; Diverse file formats: In order to correctly identify which bytes of a document represent characters, the file format must be correctly handled. Search engines that support multiple file formats must be able to correctly open and access the document and be able to tokenize the characters of the document.
 
; Faulty storage: The quality of the natural language data may not always be perfect. An unspecified number of documents, particularly on the Internet, do not closely obey proper file protocol. [[Binary data|Binary]] characters may be mistakenly encoded into various parts of a document. Without recognition of these characters and appropriate handling, the index quality or indexer performance could degrade; a minimal defensive filter is sketched below.
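A minimal defensive sketch, assuming the indexer receives raw bytes of unknown quality (the function name and sample bytes are invented for illustration):

<syntaxhighlight lang="python">
# Decode permissively, then drop non-printing control characters so they
# cannot pollute the index; real indexers would also attempt character-set
# detection rather than assuming UTF-8.
def clean_for_indexing(raw: bytes) -> str:
    text = raw.decode("utf-8", errors="replace")
    keep_whitespace = {" ", "\t", "\n", "\r"}
    return "".join(c for c in text if c.isprintable() or c in keep_whitespace)

corrupted = b"search \x00engine\x07 indexing"
print(clean_for_indexing(corrupted))  # "search engine indexing"
</syntaxhighlight>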
===Tokenization===
Unlike [[literacy|literate]] humans, computers do not understand the structure of a natural language document and cannot automatically recognize words and sentences. To a computer, a document is only a sequence of bytes. Computers do not 'know' that a space character separates words in a document. Instead, humans must program the computer to identify what constitutes an individual or distinct word, referred to as a token. Such a program is commonly called a [[tokenizer]], [[parser]], or [[Lexical analysis|lexer]]. Many search engines, as well as other natural language processing software, incorporate [[Comparison of parser generators|specialized programs]] for parsing, such as [[YACC]] or [[Lex programming tool|Lex]].
 
During tokenization, the parser identifies sequences of characters that represent words and other elements, such as punctuation, which are represented by numeric codes, some of which are non-printing control characters. The parser can also identify [[Entity extraction|entities]] such as [[email]] addresses, phone numbers, and [[Uniform Resource Locator|URL]]s. When identifying each token, several characteristics may be stored, such as the token's case (upper, lower, mixed, proper), language or encoding, lexical category (part of speech, like 'noun' or 'verb'), position, sentence number, sentence position, length, and line number.
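A minimal tokenizer sketch along these lines (the patterns and recorded attributes are deliberate simplifications, not any engine's actual rules):

<syntaxhighlight lang="python">
import re

# Recognize URLs and email addresses before plain words; record position
# and a coarse case classification for each token.
TOKEN = re.compile(
    r"(?P<url>https?://\S+)"
    r"|(?P<email>[\w.+-]+@[\w-]+\.[\w.-]+)"
    r"|(?P<word>\w+)"
)

def tokenize(text):
    for m in TOKEN.finditer(text):
        token = m.group()
        yield {
            "token": token,
            "kind": m.lastgroup,    # "url", "email", or "word"
            "position": m.start(),  # character offset in the document
            "case": ("upper" if token.isupper()
                     else "lower" if token.islower()
                     else "proper" if token.istitle()
                     else "mixed"),
        }

for t in tokenize("Contact admin@example.com or see https://example.com today"):
    print(t)
</syntaxhighlight>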
 
===Language recognition===
===Format analysis===
Format analysis can involve quality improvement methods to avoid including 'bad information' in the index. Document authors can manipulate the formatting information to include additional content. Examples of abusing document formatting for [[spamdexing]] (a minimal detection sketch follows the list):
 
* Including hundreds or thousands of words in a section that is hidden from view on the computer screen, but visible to the indexer, by use of formatting (e.g. hidden [[Span and div|"div" tag]] in [[HTML]], which may incorporate the use of [[CSS]] or [[JavaScript]] to do so).
* Setting the foreground font color of words to the same as the background color, making words hidden on the computer screen to a person viewing the document, but not hidden to the indexer.
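A minimal sketch of detecting such hidden text, assuming styles appear as inline attributes and a white page background (real pages also require CSS and JavaScript evaluation, which this deliberately ignores):

<syntaxhighlight lang="python">
import re

# Flag inline styles that hide text from the viewer: display:none, or a
# white foreground on an assumed white background.
HIDDEN = re.compile(
    r'style="[^"]*(?:display\s*:\s*none|color\s*:\s*#fff(?:fff)?)[^"]*"',
    re.IGNORECASE,
)

def suspicious_styles(html: str):
    return [m.group() for m in HIDDEN.finditer(html)]

page = ('<div style="display:none">repeated keywords here</div>'
        '<p style="color:#ffffff">more hidden keywords</p>')
print(suspicious_styles(page))
# ['style="display:none"', 'style="color:#ffffff"']
</syntaxhighlight>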
 
===Section recognition===
Some search engines incorporate section recognition, the identification of major parts of a document, prior to tokenization. Not all the documents in a corpus read like a well-written book, divided into organized chapters and pages. Many documents on the [[Internet|web]], such as newsletters and corporate reports, contain erroneous content and side-sections that do not contain primary material (that which the document is about). For example, articles on the Wikipedia website display a side menu with links to other web pages. Some file formats, like HTML or PDF, allow for content to be displayed in columns. Even though the content is displayed, or rendered, in different areas of the view, the raw markup content may store this information sequentially. Words that appear sequentially in the raw source content are indexed sequentially, even though these sentences and paragraphs are rendered in different parts of the computer screen. If search engines index this content as if it were normal content, the quality of the index and search quality may be degraded due to the mixed content and improper word proximity. Two primary problems are noted:
 
* Content in different sections is treated as related in the index, when in reality it is not.
* Organizational ''side bar'' content is included in the index, even though it does not contribute to the meaning of the document, so the index is filled with a poor representation of its documents.
 
Section analysis may require the search engine to implement the rendering logic of each document, essentially an abstract representation of the actual document, and then index the representation instead. For example, some content on the Internet is rendered via JavaScript. If the search engine does not render the page and evaluate the JavaScript within the page, it would not 'see' this content in the same way and would index the document incorrectly. Given that some search engines do not bother with rendering issues, many web page designers avoid displaying content via JavaScript or use the <code>noscript</code> tag to ensure that the web page is indexed properly. At the same time, this fact can also be [[spamdexing|exploited]] to cause the search engine indexer to 'see' different content than the viewer.
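A minimal sketch of discarding side-sections before indexing, assuming the third-party BeautifulSoup library and that boilerplate lives in conventional HTML elements (real engines implement much richer rendering logic):

<syntaxhighlight lang="python">
from bs4 import BeautifulSoup  # third-party library, assumed available

# Drop navigational side-sections and scripts before tokenization so
# side-bar text cannot distort word proximity in the index.
def primary_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["nav", "aside", "header", "footer", "script", "style"]):
        tag.decompose()  # remove the element and its entire subtree
    return soup.get_text(separator=" ", strip=True)

page = ("<nav>Home | About | Contact</nav>"
        "<p>The article body that should be indexed.</p>"
        "<aside>Related links</aside>")
print(primary_text(page))  # "The article body that should be indexed."
</syntaxhighlight>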
===Meta tag indexing===
Meta tag indexing plays an important role in organizing and categorizing web content. Specific documents often contain embedded meta information such as author, keywords, description, and language. For HTML pages, the [[meta tag]] contains keywords which are also included in the index. Earlier Internet [[search engine technology]] would only index the keywords in the meta tags for the forward index; the full document would not be parsed. At that time full-text indexing was not as well established, nor was [[computer hardware]] able to support such technology. The design of the HTML markup language initially included support for meta tags for the very purpose of being properly and easily indexed, without requiring tokenization.<ref>Berners-Lee, T., "Hypertext Markup Language - 2.0", RFC 1866, Network Working Group, November 1995.</ref>
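A minimal sketch of meta tag extraction in that early forward-index spirit, using Python's standard-library parser (the class name and sample markup are invented for illustration):

<syntaxhighlight lang="python">
from html.parser import HTMLParser

# Collect author, keywords, and description meta tags without
# tokenizing the document body.
class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = (attrs.get("name") or "").lower()
            if name in ("author", "keywords", "description"):
                self.meta[name] = attrs.get("content", "")

extractor = MetaExtractor()
extractor.feed('<head><meta name="keywords" content="indexing, search">'
               '<meta name="description" content="How indices are built."></head>')
print(extractor.meta)
# {'keywords': 'indexing, search', 'description': 'How indices are built.'}
</syntaxhighlight>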
 
As the Internet grew through the 1990s, many [[brick and mortar business|brick-and-mortar corporations]] went 'online' and established corporate websites. The keywords used to describe webpages (many of which were corporate-oriented webpages similar to product brochures) changed from descriptive to marketing-oriented keywords designed to drive sales by placing the webpage high in the search results for specific search queries. The fact that these keywords were subjectively specified led to [[spamdexing]], which drove many search engines to adopt full-text indexing technologies in the 1990s. Website designers could only place so many 'marketing keywords' into the content of a webpage before draining it of all interesting and useful information. Given the conflict with the business goal of designing user-oriented websites that were 'sticky', the [[customer lifetime value]] equation was changed to incorporate more useful content into the website in hopes of retaining the visitor. In this sense, full-text indexing was more objective and increased the quality of search engine results, as it was one more step away from subjective control of search engine result placement, which in turn furthered research of full-text indexing technologies.{{cn|date=August 2025}}
 
In [[desktop search]], many solutions incorporate meta tags to provide a way for authors to further customize how the search engine will index content from various files, beyond what is evident from the file content. Desktop search is more under the control of the user, while Internet search engines must focus more on the full-text index.{{cn|date=August 2025}}
 
==See also==
* [[Full-text search]]
* [[Information extraction]]
* [[Instant indexing]]
* [[Key Word in Context]]
* [[Selection-based search]]