
I am struggling to find a complete, definitive and up-to-date answer to the Angular SEO problem.

I have an Angular app that has a single main template index.html with multiple views. Views are handled like so:

app.config(["$routeProvider", function($routeProvider) {

  $routeProvider

  .when("/", {
    templateUrl : "views/home.html",
    controller  : "HomeController",
    title: "Home"
  })

  .when("/about", {
    templateUrl : "views/about.html",
    controller  : "AboutController",
    title: "About"
  });

}]);
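
The title value on each route is a custom property; ngRoute does not apply it to the page automatically. A minimal sketch of how it is presumably wired up (this run block is an assumption, not part of the code above):

app.run(["$rootScope", function($rootScope) {

  // On every successful route change, copy the route's custom "title"
  // property into the document title.
  $rootScope.$on("$routeChangeSuccess", function(event, current) {
    if (current && current.title) {
      document.title = current.title;
    }
  });

}]);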

Now when I go to www.example.com/#/about I get the about page.

The problem: Google does not seem to index the links or content in views.

I’ve seen tutorials explaining that Google now executes JavaScript, so it should render the views; however, when it crawls, it replaces the hashbang with ?_escaped_fragment_=.

1: Should I enable hashbangs instead of just the hash in the URL? Will this help Google crawl my site? Angular’s docs show this as $locationProvider.hashPrefix('!'); (see the config sketch after these questions).

2: Should I instead aim for clean URLs in HTML5 mode, e.g. going to www.example.com/about?

3: Is the hashbang/HTML5-mode URL the only thing I need to worry about to get things indexed? Will Google really execute the JavaScript required to load the view and update the page title, and then read the rendered result?
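
For question 1, a minimal config sketch (assuming the same app module as above) that switches the URLs from /#/about to /#!/about:

app.config(["$locationProvider", function($locationProvider) {

  // Use the hashbang prefix, which is the URL form Google's (old) AJAX
  // crawling scheme rewrites to ?_escaped_fragment_=.
  $locationProvider.hashPrefix('!');

}]);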

2 Answers


  1. For each page with a hashbang that you want Google to crawl, you should provide a corresponding _escaped_fragment_ page (a snapshot) that Google will actually visit instead.
    That snapshot should be plain HTML.

    To my knowledge, Google will not interpret JavaScript and load your views.

    I suggest you visit this page: Serious Angular SEO

    So, to answer your questions:

    1: Yes;

    2: No;

    3: You need to provide snapshots for Google; this is where the actual SEO work resides. There is server-side work involved: rendering a snapshot whenever _escaped_fragment_ is detected in the URL (a minimal sketch follows).
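
    A minimal sketch of that server-side piece, assuming a Node/Express backend and a snapshots/ directory of prerendered HTML files (both the directory and the name mapping are hypothetical):

    var express = require("express");
    var path = require("path");

    var server = express();

    // Google rewrites /#!/about to /?_escaped_fragment_=/about; when that
    // parameter is present, serve a prerendered HTML snapshot instead of
    // the Angular app.
    server.use(function(req, res, next) {
      var fragment = req.query._escaped_fragment_;
      if (fragment === undefined) {
        return next();
      }
      var name = (fragment === "" || fragment === "/")
        ? "home"
        : fragment.replace(/^\//, "").replace(/\//g, "-");
      res.sendFile(path.join(__dirname, "snapshots", name + ".html"));
    });

    server.use(express.static(path.join(__dirname, "public"))); // the normal SPA
    server.listen(3000);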

  2. UPDATE (quoting Google’s Webmaster Central announcement, linked below): We are no longer recommending the AJAX crawling proposal we made back in 2009.

    In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. Because “crawlers … [were] not able to see any content … created dynamically,” we proposed a set of practices that webmasters can follow in order to ensure that their AJAX-based applications are indexed by search engines.

    Times have changed. Today, as long as you’re not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers. To reflect this improvement, we recently updated our technical Webmaster Guidelines to recommend against disallowing Googlebot from crawling your site’s CSS or JS files.

    Since the assumptions for our 2009 proposal are no longer valid, we recommend following the principles of progressive enhancement. For example, you can use the History API pushState() to ensure accessibility for a wider range of browsers (and our systems).

    http://googlewebmastercentral.blogspot.com/2015/10/deprecating-our-ajax-crawling-scheme.html
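
    In AngularJS terms, following the pushState() suggestion above means enabling HTML5 mode. A sketch, assuming a <base href="/"> tag in index.html and a server that returns index.html for these paths:

    app.config(["$locationProvider", function($locationProvider) {

      // Use the History API (pushState): routes become /about instead of /#/about.
      $locationProvider.html5Mode(true);

    }]);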
