The right way to migrate a site to Ajax

  • Thread starter: vdo
  • Start date
Status
Closed for further replies.

vdo

Management has decided to migrate our existing site to Ajax.

What should we do to avoid losing the PageRank of internal pages and the indexing of our content?
The site has over 2000 pages, and we'd rather not have all of them indexed from scratch on the new platform.
 

vdo

Question withdrawn. I found an example with unique page URLs.
 

vdo

To: Серёга
Google clearly sees the old page:
backbase.com/go/home/company/news/008_ohra_backbase.php
while it is merged with the new address and appears in the browser as:
backbase.com/#home/company/news/008_ohra_backbase.xml
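For illustration, a minimal client-side sketch of the trick described here: each piece of content gets a crawlable static URL, and a small script on that page forwards real browsers to the single-page Ajax application by turning the path into a fragment identifier. Robots of that era didn't execute JavaScript, so they index the static page. The `/go/` prefix and the `.php` → `.xml` mapping are read off the example above; the script itself is a hypothetical reconstruction, not Backbase's actual code.

```typescript
// Runs on each crawlable static page, e.g.
// /go/home/company/news/008_ohra_backbase.php
function redirectToAjaxApp(): void {
  const path = window.location.pathname;
  const prefix = "/go/"; // assumed prefix for the crawlable copies

  if (path.startsWith(prefix)) {
    // Strip the prefix and swap the static extension, then rebuild
    // the address as a fragment URL on the single-page application:
    // "/go/home/.../008.php" -> "/#home/.../008.xml"
    const fragment = path
      .slice(prefix.length)
      .replace(/\.php$/, ".xml"); // assumed extension mapping
    window.location.replace("/#" + fragment);
  }
}

redirectToAjaxApp();
```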
 

Gisma

brrr, it's been a rough day, but I didn't understand a thing
 

Guest

Quote:

How will you support search engine indexing?

Search engines point "robot" scripts at a website and have them accumulate a collection of pages. The robot works by scooting through the website, finding standard links to standard URLs and following them. It won't click on buttons or type in values like a user would, and it probably won't distinguish among fragment identifiers either. So if it sees links to two URLs that differ only in their fragment identifiers, it will follow one or the other, but not both. That's a big problem, because it means an entire Ajax application will have only one search link associated with it, and you'll miss out on a lot of potential visits.

The simplest approach is to live with a single page and do whatever you can with the initial HTML. Ensure it contains all info required for indexing, focusing on meta tags, headings, and initial content.
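As a concrete sketch of that "single page" approach, here is a hypothetical server route (Express, with invented content and paths) where everything a robot needs - title, meta description, headings, and the initial content - is present in the first HTML response, before any Ajax call runs:

```typescript
import express from "express";

const app = express();

app.get("/", (_req, res) => {
  // The one and only page of the application. All indexable
  // information is in the initial HTML; Ajax only updates it later.
  res.send(`<!DOCTYPE html>
<html>
<head>
  <title>Acme Widgets - catalogue and news</title>
  <meta name="description" content="Catalogue, news and company information.">
</head>
<body>
  <h1>Acme Widgets</h1>
  <div id="content">
    <h2>Latest news</h2>
    <p>Full text of the latest items, indexable as-is.</p>
  </div>
  <script src="/app.js"></script>
</body>
</html>`);
});

app.listen(3000);
```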

A more sophisticated technique is to provide a Site Map page, linked from the main page, that links to all URLs you want indexed, with the link text containing suitable descriptions. One catch here: you can't link to URLs with fragment identifiers, so you'll need to come up with a way to present search engines with standard URLs, even though your application would normally present those using fragment identifiers. For example, have the Site Map link to a standard URL and configure your server to redirect it to the corresponding fragment-identifier URL. It's probably reasonable to explicitly check whether a robot is making the request, and preserve the URL if that's the case - i.e. when the robot requests the standard URL, simply output the same contents the user would see at the fragment-identifier address. Thus, the search engine will index the standard URL with the correct content, and when a user clicks a search result, the server will know (because the client is not a robot) to redirect to the fragment-identifier address.
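A sketch of that robot check (the bot list, the `/pages/:id` scheme, and the `renderPage` helper are all assumptions for illustration, not Backbase's actual implementation): robots requesting a standard URL - the kind the Site Map links to - get the full page content, while human visitors get redirected to the fragment version.

```typescript
import express from "express";

const app = express();

// Assumed, deliberately non-exhaustive list of crawler User-Agents.
const BOT_PATTERN = /googlebot|bingbot|yandex|slurp/i;

// Hypothetical helper: look up and render the content for one page.
function renderPage(id: string): string {
  return `<html><body><h1>Page ${id}</h1><p>Indexable content.</p></body></html>`;
}

app.get("/pages/:id", (req, res) => {
  const userAgent = req.get("User-Agent") ?? "";

  if (BOT_PATTERN.test(userAgent)) {
    // Robot: serve the same content a user would see at /#pages/<id>,
    // so the standard URL gets indexed with the correct content.
    res.send(renderPage(req.params.id));
  } else {
    // Human: send the browser on to the Ajax application's fragment URL.
    res.redirect(`/#pages/${req.params.id}`);
  }
});

app.listen(3000);
```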

Search engine strategies for Ajax applications have been discussed in a detailed paper by Jeremy Hartlet of Backbase. See that paper for more details, though note that some of the advice is Backbase-specific.
 