Migrating a site to Ajax the right way

Thread in the "Web 2.0, AJAX, Ruby, RSS technologies" section, created by user vdo, 5 Oct 2006.

  1. vdo

    vdo Guest

    Management has decided to migrate our existing site to Ajax.

    What should we do to avoid losing the PR of internal pages and the indexing of our content?
    The site has over 2000 pages, and we'd rather not have all of them re-indexed "from scratch" on the new platform.
  2. vdo

    vdo Guest

    Question withdrawn. I found an example with unique page URLs.
  3. admin

    admin Well-Known Member

    8 Aug 2003
    To: vdo
  4. vdo

    vdo Guest

    To: Серёга
    Google clearly still sees the old page:
    but it is glued to the new URL, which appears in the browser as:
  5. Gisma

    Gisma Guest

    Brrr, it's been a rough day, but I didn't understand a thing.
  6. Guest


    How will you support search engine indexing?

    Search engines point "robot" scripts at a website and have them accumulate a collection of pages. The robot works by scooting through the website, finding standard links to standard URLs and following them. It won't click on buttons or type in values like a user would, and it probably won't distinguish among fragment identifiers either. So if it sees links to http://ajax.shop/#Songs and http://ajax.shop/#Movies, it will follow one or the other, but not both. That's a big problem, because it means an entire Ajax application will only have one search link associated with it, and you'll miss out on a lot of potential visits.
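    To see why a crawler (or any HTTP client) can't tell such URLs apart: the fragment after "#" is handled purely client-side and is never sent to the server, so both addresses resolve to the same HTTP request. A small Python sketch, using the hypothetical ajax.shop URLs from above:

```python
from urllib.parse import urlsplit

# The fragment identifier stays in the browser; it is not part of
# the HTTP request, so both URLs hit the server identically.
songs = urlsplit("http://ajax.shop/#Songs")
movies = urlsplit("http://ajax.shop/#Movies")

print(songs.fragment, movies.fragment)  # the fragments differ...
print(songs.path == movies.path)        # ...but the request path is the same
```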

    The simplest approach is to live with a single page and do whatever you can with the initial HTML. Ensure it contains all info required for indexing, focusing on meta tags, headings, and initial content.

    A more sophisticated technique is to provide a Site Map page, linked from the main page, that links to all URLs you want indexed, with link text containing suitable descriptions. One catch here: you can't link to URLs with fragment identifiers, so you'll need to come up with a way to present search engines with standard URLs, even though your application would normally present those using fragment identifiers. For example, have the Site Map link to http://ajax.shop/Movies and configure your server to redirect to http://ajax.shop/#Movies. It's probably reasonable to explicitly check whether a robot is performing the request, and preserve the URL if that's the case - i.e. when the robot requests http://ajax.shop/Movies, simply output the same contents as the user would see on http://ajax.shop/#Movies. Thus, the search engine will index http://ajax.shop/Movies with the correct content, and when the user clicks on a search result, the server will know (because the client is not a robot) to redirect to http://ajax.shop/#Movies.
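    A minimal sketch of that robot check as a Python WSGI app. The bot-marker list, the render_section helper, and the URL scheme are illustrative assumptions, not something prescribed in this thread:

```python
# Serve full content to crawlers at fragment-free URLs; redirect
# ordinary browsers to the Ajax URL with the fragment identifier.
BOT_MARKERS = ("googlebot", "yandex", "bingbot")  # hypothetical bot list

def render_section(name):
    # Stand-in for whatever produces the real page content.
    return "<html><body><h1>%s</h1></body></html>" % name

def app(environ, start_response):
    section = environ.get("PATH_INFO", "/").strip("/")    # e.g. "Movies"
    agent = environ.get("HTTP_USER_AGENT", "").lower()
    is_robot = any(marker in agent for marker in BOT_MARKERS)
    if is_robot:
        # Robots get the content directly, so the fragment-free URL
        # (e.g. http://ajax.shop/Movies) is indexed with the right text.
        body = render_section(section).encode("utf-8")
        start_response("200 OK", [("Content-Type", "text/html")])
        return [body]
    # Ordinary browsers are bounced to the Ajax-style URL.
    start_response("302 Found", [("Location", "/#%s" % section)])
    return [b""]
```

    The same split could be done in any server-side language; the point is only that the robot check happens before the redirect decision.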

    Search engine strategies for Ajax applications have been discussed in a detailed paper by Jeremy Hartlet of Backbase (http://www.backbase.com/#dev/tech/001_desi...ias_for_sea.xml). See that paper for more details, though note that some of the advice is Backbase-specific.