Google Using Maps API Key to Extract Content

I’m building a LAPP (landing application) for a client, and I noticed something incredibly interesting.

The landing page uses the Google Maps API to add a layer of relevance by delivering a map experience to the user. Since it’s currently in development, the page has no links pointing to it. The URL is also brand new, registered just a month ago. By accident I entered the page URL as a Google query and lo and behold, I got a result! The page has been indexed.

So how did this page get indexed? The only connection to Google is through the Maps API. It seems that Google is using the Maps API key to direct Googlebot to the site and collect its content for indexing. This is interesting in itself, but the larger questions are even more interesting.
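For context on how the connection works: with the Maps JavaScript API of that era (v2), the API key sits right in the page source, so Google receives both the key and the page URL on every script load. A sketch of a typical embed is below; the key, element id, and coordinates are placeholders, not the actual page’s code:

```html
<!-- Hypothetical Maps API v2 embed; YOUR_API_KEY and the coordinates are placeholders -->
<script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=YOUR_API_KEY"
        type="text/javascript"></script>
<div id="map" style="width: 500px; height: 300px"></div>
<script type="text/javascript">
  // v2 API: check browser support, then create a map centered on a point
  if (GBrowserIsCompatible()) {
    var map = new GMap2(document.getElementById("map"));
    map.setCenter(new GLatLng(40.7484, -73.9857), 13); // example coordinates
  }
</script>
```

Every page view fires a request to maps.google.com carrying that key, which is exactly the kind of signal that could tip Google off to an otherwise unlinked URL.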

What else can Google extract using the API and how will they use this information?






9 responses to “Google Using Maps API Key to Extract Content”

  1. Bill

    Nice discovery. Google does find some interesting ways to extract information from pages, and uncover information that we might not expect them to look at or use.


  2. Jonathan Mendez

    Thanks Bill.
    Yes, it was one of those moments that reminds me that Google is so much smarter and doing so many more things than most folks give them credit for.


  3. Sumit Chachra

    It wouldn’t surprise me if they crawl when they get requests from the Maps API.
    But it might just be a referral leak as discussed here:
    since Google usually keeps data across their various products separate (as much as possible)


  4. Jonathan Mendez

    Hi Sumit,
    There are no referrals that I know of. I’ve only accessed the page directly through my browser.


  5. Sumit Chachra

    Did you go to a page while you were on that page?
    Maybe click a bookmark or a link off that page?
    Because sometimes servers store referral data in their logs (which are sometimes public) and hence get crawled by Google!
    But, as I said, you might just be right… for pages under development it might be best to use a robots.txt with appropriate settings.
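    As a side note on that suggestion: a minimal robots.txt that asks every crawler to stay out of a site under development would look like the fragment below. Keep in mind it is advisory only, not access control.

    ```
    # Block all well-behaved crawlers from the entire site
    User-agent: *
    Disallow: /
    ```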


  6. Search Engine Land: News About Search Engines & Search Marketing

    SearchCap: The Day In Search, October 24, 2007

    Below is what happened in search today, as reported on Search Engine Land and from other places across the web….


  7. pete

    “…the URL is brand new. Just registered a month ago”
    Wouldn’t it be easier to assume that Google just crawls newly registered domains?


  8. Jonathan Mendez

    Hi Pete,
    I wouldn’t assume that. From what I know, it usually takes six months or longer for Google to index a brand-new domain.


  9. Richard Hearne

    One interesting thing we’ve noticed on this side of the ocean: just prior to Google introducing the new geo-targeting feature in Webmaster Console, more sites started appearing in the ‘Pages from Ireland’ index that were neither hosted in Ireland nor sitting on a .ie ccTLD. One feature of some of these sites was the use of Google Maps to locate their premises. We don’t have Google Local here, so they could not have registered their address with Google.
    I’m willing to bet they do a lot more with the disparate data than they will ever let on.

