Network: Web Design: Finding an irritation bypass when in search of Marx

Jason Cranford Teague
Sunday 05 December 1999 19:02 EST

IT IS universally agreed that finding information on the Web can be damn irritating. It's not impossible, of course. If it were, no one would use it (and we'd be out of jobs).

Most people, novice and experienced Net users alike, start their quest for information with one of the many search engines. Seeking, say, a Marx Brothers movie clip from A Night at the Opera, they type in "Marx Brothers", hit "search" and are whisked away to what is usually a massive list of Web pages that might, just might, contain information relevant to them.

Naturally, if you have that famous movie clip of Groucho trying to stuff the entire crew of the cruise ship into his cabin, then you want your website placed towards the top, if not at the very top, of the search results list. Yet you are potentially competing against thousands of other Web pages that have the key words "Marx" and "Brothers" on them, some of which might be extolling the virtues of the proletariat rather than showcasing comedic genius.

So, how do you separate Karl Marx's philosophies from Groucho Marx's shtick? Vladimir Lenin's revolution from John Lennon's "Revolution"? How do you make sure that the content you have gets found by the people who need it?

The place to start is by understanding how the search engines your potential visitors use actually work.

Different flavours

Although the outcome is often the same - a list of search results - there are really two different types of search engines on the Web: crawlers and directories. These two methods differ primarily in the ways that they gather the data from which they create their index of sites, which is then searched.

Crawlers: Crawlers, such as AltaVista or Excite, use a program called a spider, which "crawls" through the Web, indexing pages along the way. Visitors can then search through the results that the spider finds. However, if a change is made to a Web page, the spider has to crawl through that page again before the change is detected. The World Wide Web is a really big place, so it might take a while for the spider to get back again.
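
To make that concrete, here is a minimal sketch in Python of how a spider might work. It is only a sketch under simple assumptions: the starting address and the ten-page cap are placeholders, and a real crawler adds politeness rules, scheduling and far more robust parsing.

    # A toy spider: fetch a page, remember it, queue its links, repeat.
    # Uses only Python's standard library; the start URL is a placeholder.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        """Collects the href attribute of every anchor tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=10):
        """Breadth-first crawl, stopping after max_pages successful fetches."""
        seen, queue, indexed = set(), deque([start_url]), {}
        while queue and len(indexed) < max_pages:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except OSError:
                continue  # unreachable pages are simply skipped
            indexed[url] = html  # a real engine would parse and index the text
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute.startswith("http"):
                    queue.append(absolute)
        return indexed

    print(len(crawl("http://example.com/")), "pages fetched")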

Directories: Unlike the active crawlers, passive directories require website creators (or whoever wants to do it) to register a site in their index. The advantage of a directory such as Yahoo! or DMOZ is that they are far more selective about what content is indexed, so searches tend to be more focused and produce more accurate results. However, directories are also harder to keep up to date, especially if a site has to be checked by a human being before entry. The other great advantage of a directory is that the searcher can bypass the search engine altogether, finding what they are looking for by selecting from lists of increasingly specific topics.
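
As a rough illustration (the categories and addresses below are invented), a directory can be pictured as a tree of topics that the searcher walks down, each step narrowing the subject:

    # A toy directory as a nested dictionary; real directories such as
    # Yahoo! were compiled and organised by human editors.
    directory = {
        "Entertainment": {
            "Movies": {
                "Comedy": {
                    "Marx Brothers": ["http://example.com/night-at-the-opera"],
                },
            },
        },
        "Society": {
            "Philosophy": {
                "Marxism": ["http://example.com/das-kapital"],
            },
        },
    }

    def drill_down(tree, path):
        """Follow a list of increasingly specific topics down to the sites."""
        for topic in path:
            tree = tree[topic]
        return tree

    # Bypassing the search box entirely:
    print(drill_down(directory, ["Entertainment", "Movies", "Comedy", "Marx Brothers"]))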

Hybrid Search Engines: Several search engines, Yahoo! for example, will let you search indexes created by both crawlers and directories simultaneously, giving you the advantages of both techniques at once.

The parts of a search engine

Whether a search engine uses a crawler or a registration directory to get its data, all of them have at least two parts in common: the index and the search software.

The Index: All of the content crawled by the spider, and/or all of the entries in the directory, gets placed into the index. If the search engine uses a spider, this massive database can contain every page that has been crawled, making it nearly a carbon copy of the Web. If the search engine uses a directory, then only the titles, URLs and descriptions of Web pages are included in the index.
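
One common way to structure such an index (a sketch, with invented sample pages) is as an "inverted" map from each word to the pages that contain it:

    # Build a tiny inverted index: word -> set of pages containing it.
    from collections import defaultdict

    pages = {
        "opera.html": "Groucho Marx crams the entire crew into his cabin",
        "kapital.html": "Karl Marx on the labour of the proletariat",
    }

    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)

    print(sorted(index["marx"]))  # both pages mention Marx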

The Search Software: When a visitor uses a search engine, they first enter one or more keywords. Search software then sifts through the index, matching the keywords to Web pages and ranking them in order of relevance. So, how does the search software make the crucial decision as to which pages are more relevant, and thus closer to the top of the list, than others?
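
The matching half of that job is simple enough to sketch, assuming an inverted index like the one above (here a hand-built stand-in); the ranking half is where the real work lies:

    # Matching keywords to candidate pages, before any ranking happens.
    def lookup(index, query):
        """Return every page on which at least one of the keywords appears."""
        matches = set()
        for word in query.lower().split():
            matches |= index.get(word, set())
        return matches

    sample_index = {"marx": {"opera.html", "kapital.html"},
                    "brothers": {"opera.html"}}
    print(sorted(lookup(sample_index, "Marx Brothers")))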

Ranking Web sites

Most search engines that use a crawler to produce the massive amounts of data being searched determine relevancy by following a set of rules that stay more or less consistent across products. If someone uses the search engine to find the words "Marx Brothers", the search engine will check to see:

Which pages have these words in the title.

Which pages one or both of the words appear on.

How close to the top the words appear, assuming that the closer the words are to the beginning of a page the more relevant that page is.

How frequently the words appear on the page.

How close the words appear together.

And after considering all of these criteria, it produces the list of sites in order of relevancy. Well, almost.
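
As a rough illustration of those rules in action, here is a toy ranker scoring two invented pages against the query "Marx Brothers". The weights are made up; every real engine tunes its own.

    # Score a page by the criteria above: words in the title, presence,
    # position near the top, frequency, and proximity of the two words.
    def score(title, text, query):
        words = query.lower().split()
        tokens = text.lower().split()
        s = 0.0
        positions = {}
        for w in words:
            if w in title.lower():
                s += 10                      # word appears in the title
            hits = [i for i, t in enumerate(tokens) if t == w]
            positions[w] = hits
            if hits:
                s += 5                       # word appears at all
                s += 3 / (1 + hits[0])       # earlier on the page is better
                s += len(hits)               # more frequent is better
        if len(words) == 2 and all(positions[w] for w in words):
            gap = min(abs(a - b) for a in positions[words[0]]
                                 for b in positions[words[1]])
            s += 5 / (1 + gap)               # the two words close together
        return s

    pages = {
        "opera.html": ("The Marx Brothers in A Night at the Opera",
                       "Groucho stuffs the ship's crew into his tiny cabin"),
        "kapital.html": ("Das Kapital",
                         "Karl Marx on workers and their brothers in struggle"),
    }
    query = "Marx Brothers"
    ranked = sorted(pages, key=lambda u: score(*pages[u], query), reverse=True)
    print(ranked)  # opera.html should come out on top

Even this toy version shows why results can surprise you: a page that never mentions the clip can still climb the list if it uses the right words in the right places.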

Secret ingredients

While all of the major search engines follow this basic recipe, if they all worked in exactly the same way we would only need one. Some crawlers index more pages than others, while many directories use human beings to evaluate submitted websites. All search engines put their own spin on searching to differentiate themselves from the competition.

Next week, I'll delve deeper into some of the secret ingredients that different search engines use to find your site. Then, over the following weeks, I'll be taking a look at how to optimise a site for searching, and at some of the online resources that can help you get a handle on the search engine monster.

Jason Cranford Teague is the author of 'DHTML For the World Wide Web'. If you have questions, you can find an archive of his column at Webbed Environments (www.webbedenvironments.com) or e-mail him at jason@webbedenvironments.com
