One thing that bothers site owners like me is how Google actually manages to crawl the pages we submit to it. We all know how vast the web is these days and how many sites are out there. How does Google manage to crawl all of them? Are there particular techniques it uses to discover and work through all of these pages? I'd appreciate some quick answers on this.
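For context on what I mean by "crawling": as I understand it, a crawler keeps a queue of URLs, fetches each one, extracts its links, and queues any links it hasn't seen before. Here is a minimal sketch of that loop, using a hard-coded link graph in place of real HTTP fetches (the URLs and the `PAGES` dict are made up purely for illustration):

```python
from collections import deque

# Toy link graph standing in for real pages. A real crawler would
# fetch each URL over HTTP and parse the HTML for <a href> links.
PAGES = {
    "https://example.com/": ["https://example.com/about", "https://example.com/blog"],
    "https://example.com/about": ["https://example.com/"],
    "https://example.com/blog": ["https://example.com/blog/post-1"],
    "https://example.com/blog/post-1": [],
}

def crawl(seed):
    """Breadth-first crawl: follow links outward from a seed URL,
    skipping anything already visited."""
    seen = {seed}
    queue = deque([seed])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)                  # "index" the page
        for link in PAGES.get(url, []):    # discover outgoing links
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("https://example.com/"))
```

Obviously Google operates at an entirely different scale (distributed fetching, politeness rules, sitemaps, recrawl scheduling), but is this queue-and-follow-links idea basically the core of it?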