Moving a site to Ajax the right way

  • Thread author: vdo
  • Start date
Status
Closed for further replies.
vdo

Guest
#1
Management has decided to move our existing site to Ajax.

What should we do to avoid losing the PageRank of internal pages and the indexing of our content?
The site has over 2000 pages, and we'd rather not have all of them indexed from scratch on the new platform.
 
vdo

Guest
#2
Question withdrawn. I found an example with unique page URLs.
 
vdo

Guest
#4
To: Seryoga
Google clearly sees the old page:
backbase.com/go/home/company/news/008_ohra_backbase.php
but it is glued to the new address, which appears in the browser as:
backbase.com/#home/company/news/008_ohra_backbase.xml
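The mapping in that example is mechanical, so the server can glue every old URL to its new fragment address with a simple rewrite. A minimal sketch of such a mapping (the function name and rules here are assumptions for illustration, not Backbase's actual setup):

```python
def fragment_url(old_path: str) -> str:
    """Turn a legacy path like /go/home/company/news/page.php into the
    new single-page address /#home/company/news/page.xml, which the
    server can then send back as a 301 redirect Location."""
    path = old_path.removeprefix("/go/")
    if path.endswith(".php"):
        path = path[: -len(".php")] + ".xml"
    return "/#" + path

# Example:
# fragment_url("/go/home/company/news/008_ohra_backbase.php")
# -> "/#home/company/news/008_ohra_backbase.xml"
```

A permanent (301) redirect from each old URL to its fragment address is what lets search engines transfer the accumulated PageRank instead of treating the new pages as brand new.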
 
Gisma

Guest
#5
Brr, it's been a rough day, but I didn't understand a thing.
 

Guest
#6
Quote:

How will you support search engine indexing?

Search engines point "robot" scripts to a website and have them accumulate a collection of pages. The robot works by scooting through the website, finding standard links to standard URLs and following them. It won't click on buttons or type in values like a user would, and it probably won't distinguish among fragment identifiers either. So if it sees links to http://ajax.shop/#Songs and http://ajax.shop/#Movies, it will follow one or the other, but not both. That's a big problem, because it means an entire Ajax application will only have one search link associated with it, and you'll miss out on a lot of potential visits.

The simplest approach is to live with a single page and do whatever you can with the initial HTML. Ensure it contains all info required for indexing, focusing on meta tags, headings, and initial content.

A more sophisticated technique is to provide a Site Map page, linked from the main page, that links to all URLs you want indexed, with the link text containing suitable descriptions. One catch here: you can't link to URLs with fragment identifiers, so you'll need to come up with a way to present search engines with standard URLs, even though your application would normally present those using fragment identifiers. For example, have the Site Map link to http://ajax.shop/Movies and configure your server to redirect to http://ajax.shop/#Movies. It's probably reasonable to explicitly check whether a robot is performing the request, and preserve the URL in that case - i.e. when the robot requests http://ajax.shop/Movies, simply output the same contents the user would see on http://ajax.shop/#Movies. Thus, the search engine will index http://ajax.shop/Movies with the correct content, and when a user clicks on a search result, the server will know (because the client is not a robot) to redirect to http://ajax.shop/#Movies.
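The robot-vs-browser branching described above can be sketched as a small server-side routine. This is a hypothetical illustration, not the quoted paper's implementation: the function names, the User-Agent list, and the placeholder renderer are all assumptions.

```python
# Assumed, illustrative list of crawler User-Agent substrings.
ROBOT_SIGNATURES = ("googlebot", "yandexbot", "bingbot", "slurp")

def is_robot(user_agent: str) -> bool:
    """Crude User-Agent check; a real site would use a maintained list."""
    ua = user_agent.lower()
    return any(sig in ua for sig in ROBOT_SIGNATURES)

def render_section(section: str) -> str:
    """Placeholder for server-side rendering of one section's content."""
    return f"<html><body><h1>{section}</h1>...</body></html>"

def handle_request(path: str, user_agent: str) -> tuple[int, str]:
    """For a crawlable Site Map URL like /Movies:
    - robots get a 200 with the same content a user sees at /#Movies,
    - browsers get a 302 redirect to the fragment URL."""
    section = path.lstrip("/")
    if is_robot(user_agent):
        return 200, render_section(section)
    return 302, "/#" + section
```

The key design point is that the crawler only ever sees plain URLs with real content, while interactive users are bounced to the fragment address the Ajax application actually runs on.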

Search engine strategies for Ajax applications have been discussed in a detailed paper by Jeremy Hartlet of Backbase: http://www.backbase.com/#dev/tech/001_desi...ias_for_sea.xml. See that paper for more details, though note that some of the advice is Backbase-specific.
 