
I’m working on a big project with 15 sub-sites and 13 different schema pages.
Currently, the site uses ui.router for all pages, and my data is fetched with Angular $http requests.
After tests and trials in Search Console, it looks like Google doesn’t see any of my pages except the home page, and the data from the $http requests doesn’t show up.
What am I doing wrong?

What I’ve done so far:

Set the base tag in the <head>:

<base href="/" />

Create .htaccess:

RewriteEngine On 
Options FollowSymLinks

RewriteBase /

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /#/$1 [L]

Add to app.config:

$locationProvider.html5Mode(true);

Example from my app.config:

function createState(name) {
    return {
        url: '/' + name + '/:id',
        templateUrl: 'templates/pages/' + name + '.html',
        controller: 'singlePage',
        resolve: {
            // Fetch the page data before the state is entered
            pageData: function(getData, $stateParams) {
                var params = $stateParams;
                params.type = name;
                return getData.getPageData(params.type, params);
            }
        }
    };
}

$stateProvider
    .state('info', createState('info'))
    .state('news', createState('news'))
    .state('event', createState('event'));

$urlRouterProvider.otherwise('/');
$locationProvider.html5Mode(true);

2 Answers


  1. Google bots don’t compile JavaScript, so ui-router will not work here: no matter which URL the Google bot comes to crawl, it will always get the index page of the website. On the server side, detect the bot by checking the user agent, and then you can use PhantomJS to load the Angular app and compile the HTML for the bot. (That is what I have used for my application; on the server I have Node.js.)

    Read more from here.
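
    For illustration, here is a minimal sketch of that idea as an Express middleware. The bot user-agent list and the prerenderWithPhantom() helper (e.g. a small wrapper around the phantom npm package) are assumptions for the sketch, not code from my app:

    // server.js: serve prerendered HTML to known crawlers, the Angular app to everyone else
    var express = require('express');
    var app = express();

    // Hypothetical helper that loads the Angular app in PhantomJS and
    // resolves with the fully rendered HTML for the requested URL.
    var prerenderWithPhantom = require('./prerender');

    var BOT_UA = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit/i;

    app.use(function (req, res, next) {
        if (BOT_UA.test(req.headers['user-agent'] || '')) {
            // Crawler: render the page server-side and return static HTML.
            prerenderWithPhantom(req.originalUrl).then(function (html) {
                res.send(html);
            }, next);
        } else {
            next(); // Normal visitor: fall through to the static Angular app.
        }
    });

    app.use(express.static('public'));
    app.listen(3000);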

  2. Why does the Google crawler not follow my links / state changes created by UI Router?

    Well, the Google crawl bot is able to execute JavaScript (this feature was implemented not long ago),
    but the bot still crawls URLs the way it always has: it checks the href attribute of
    all the <a> tags in your HTML markup and follows them. If you are using the JavaScript state
    change functionality provided by ui.router, the bot will never be able to follow those links.
    It also does not recognize HTML5 URL route changes -> so no pages will be crawled / indexed.
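
    As a concrete illustration (a sketch, not markup from the question): a link rendered with a real href, which ui-sref generates for states that have a url, can be followed by the bot, while a link that only triggers a JavaScript state change cannot:

    <!-- Crawlable: ui-sref renders href="/news/42" into the markup -->
    <a ui-sref="news({id: 42})">News item</a>

    <!-- Not crawlable: no href, the transition happens only in JavaScript -->
    <a ng-click="$state.go('news', {id: 42})">News item</a>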

    You can counteract that with some basic SEO functionality, but there are still some limitations you
    need to deal with. Some of these limitations are (see the markup sketch after this list):

    • Social content provided by meta tags: sharing a page on Facebook while using og:image, etc. will not work with AngularJS E2E binding.
    • The title tag used with E2E binding will not be recognized by social media sharing.
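
    As a small illustration of both points (a sketch; the bindings are hypothetical): a social sharing scraper reads the raw markup without running Angular, so it sees the unrendered expressions instead of the resolved values:

    <!-- What the scraper sees: "Loading…" and a literal "{{page.imageUrl}}" -->
    <title ng-bind="page.title">Loading…</title>
    <meta property="og:image" content="{{page.imageUrl}}">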

    How do you make the crawl bot index your pages?
    This is pretty easy: just create a sitemap.xml including all your URLs, upload it to your webserver and register it using Google Webmaster Tools. The Google bot will then crawl all the URLs you provided in your sitemap.xml, and finally it will index your pages/URLs! =)

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
     <url>
      <loc>http://example.com/</loc>
     </url>
     <url>
      <loc>http://example.com/anotherside/</loc>
     </url>
     <url>
      <loc>http://example.com/search/param1/param2</loc>
     </url>
    </urlset> 
    

    We did this and it’s working very well. You can create your sitemap.xml manually. We went a step further and automated this: our XML and ui.routes
    are created on the backend side of our web applications. We have a JSON configuration file where we configure all our routes in, and a script creates the XML and the JavaScript ui.routes
    automatically.
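
    A minimal sketch of that kind of generator in Node.js (the routes.json layout, file names and base URL are assumptions, not our actual setup):

    // generate-sitemap.js: build sitemap.xml from a shared route configuration
    var fs = require('fs');

    // Hypothetical routes.json, e.g. [{ "name": "news", "paths": ["/news/1", "/news/2"] }, ...]
    var routes = JSON.parse(fs.readFileSync('routes.json', 'utf8'));
    var baseUrl = 'http://example.com';

    var urls = [];
    routes.forEach(function (route) {
        route.paths.forEach(function (path) {
            urls.push(' <url>\n  <loc>' + baseUrl + path + '</loc>\n </url>');
        });
    });

    var xml = '<?xml version="1.0" encoding="UTF-8"?>\n' +
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
        urls.join('\n') + '\n</urlset>\n';

    fs.writeFileSync('sitemap.xml', xml);

    The same routes.json can then drive the ui.router state creation on the client, so the sitemap and the routes never get out of sync.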

    This is the result of what we did: https://www.google.de/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site:linslin.org&start=0

    If you want to build a nicely SEO/social-optimized page, don’t use SPA frameworks like AngularJS. I would also not recommend creating a precompiler; it makes no sense to create an SPA application and then precompile it. Before creating a precompiler, you should go back to the roots and use PHP, Node.js, Java, etc. to create a web application.
