A new mobile friendly testing tool

Mobile is close to our heart – we love seeing more and more sites make their content available in useful & accessible ways for mobile users. To help keep the ball rolling, we’ve now launched a new Mobile Friendly Test.

The new tool is linked from Search Console’s mobile usability report or available directly at http://ift.tt/1Z2PRiM

The updated tool provides us with room to continue to improve on its functionality, and over time, we expect it to replace the previous Mobile Friendly Test. Additionally, of course this tool also works well on your smartphone, if you need to double-check something there!

We’d like to invite you to take it for a spin, try your website and other sites that you’re curious about! Let us know how you like it – either here in the comments or in our webmaster help forums.

via Google Webmaster Central Blog Read More…

Recovering Your Organic Search Traffic from a Web Migration Gone Wrong

[Estimated read time: 9 minutes]

I know you would never change a URL without identifying where to 301-redirect it and making sure that the links, XML sitemaps, and/or canonical tags are also updated. But if you’ve been doing SEO for a while, I bet you’ve also had a few clients — even big ones — coming to you after they’ve tried to do structural web changes or migrations of any type without taking SEO best practices into consideration.

Whenever this happens, your new client comes to you for help in an “emergency” type of situation, one with two defining characteristics for the required SEO analysis:

  1. You need to prioritize:
    Your client is likely very nervous about the situation. You don’t have a lot of time to invest at the beginning to do a full audit right away. You’ll need to focus on identifying what hasn’t been done during the migration to make sure that the fundamental causes of the traffic loss are fixed — then you can move on with the rest.
  2. You might not have all the data:
    You might have only the basics — like Google Analytics & Google Search Console — and the information that the client shares with you about the steps they took when doing the changes. There are usually no previous rankings, crawls, or access to logs. You’ll need to make the most out of these two fairly easy-to-get data sources, new crawls that you can do yourself, and third-party “historical” ranking data. In this analysis we’ll work from this existing situation as a “worst-case scenario,” so anything extra that you can get will be an added benefit.

How can you make the most out of your time and basic data access to identify what went wrong and fix it — ASAP?

Let’s go through the steps for a “minimum viable” web migration validation to identify the critical issues to fix:

1. Verify that the web migration is the cause of the traffic loss.

To start, it’s key to:

  • Obtain from the client the specific changes that were done and actions taken during the migration, so you can identify those that were likely missed and prioritize their validation when doing the analysis.
  • Check that the time of the traffic loss coincides with that of the migration to validate that it was actually the cause, or whether other factors coincided with it that you can take into consideration later when doing the full audit and analysis.

Screenshot: traffic dropping shortly after a web migration.

To identify this, compare the before and after across traffic sources, per device, and for the migrated areas of your site (if not all of them changed), etc.

Use the “Why My Web Traffic Dropped” checklist to quickly verify that the loss has nothing to do with, for example, incorrect Google Analytics settings after the migration or a Google update happening at the same time.

Screenshot from Google Analytics of web traffic dropping.

I’ve had situations where the organic search traffic loss coincided not only with the web migration but also with the date of a Phantom update (and the site had the type of characteristics that update targeted).

Screenshot: Traffic loss after web migration and Google algo update.

If this is the case, you can’t expect to regain all the traffic after fixing the migration-related issues. There will be further analysis and implementations needed to fix the other causes of traffic loss.

2. Identify the pages that dropped the most in traffic, conversions, & rankings.

Once you verify that the traffic loss is due (completely or partially) to the web migration, then the next step is to focus your efforts on analyzing and identifying the issues in those areas that were hit the most from a traffic, conversions, & rankings perspective. You can do this by comparing organic search traffic per page before and after the migration in Google Analytics:

Screenshot: comparing organic search traffic per page before and after the migration in Google Analytics.

Select those that previously had the highest levels of traffic & conversions and that lost the highest percentages of traffic.

You can also do something similar with those pages with the highest impressions, clicks, & positions that have also had the greatest negative changes from the Google Search Console “Search Analytics” report:

Screenshot: the Google Search Console "Search Analytics" report.

After gathering this data, consolidate all of these pages (and related metrics) in an Excel spreadsheet. Here you’ll have the most critical pages that have lost the most from the migration.

Pages and related metrics consolidated in an Excel sheet

3. Identify the keywords these pages were ranking for and start monitoring them.

In most cases the issues will be technical (though sometimes they may be due to structural content issues). However, it’s important to identify the keywords these pages had been ranking for that lost visibility post-migration, start tracking them, and verify their improvement after the issues are fixed.

Screenshot: identifying which keywords the page was ranking for.

This can be done by gathering data from tools with historical keyword ranking features — like SEMrush, Sistrix, or SearchMetrics — that also show you which pages have lost rankings during a specific period of time.

This can be a bit time-consuming, so you can also use URL Profiler to discover the keywords that were ranking in the past. It easily connects with your Google Search Console “Search Analytics” data via the API to obtain those pages’ queries from the last 3 months.

Connecting URL Profiler to Google Search Console

As a result, you’ll have your keyword data and selected critical pages to assess in one spreadsheet:

Keyword data and selected critical pages to assess in one spreadsheet.

Now you can start tracking these keywords with your favorite keyword monitoring tool. You can even track the entire SERPs for your keywords with a tool like SERPwoo.

4. Crawl both the list of pages with traffic drops & the full website to identify issues and gaps.

Now you can crawl the list of pages you’ve identified using the “list mode” of an SEO crawler like Screaming Frog, then crawl your site with the “crawler mode,” comparing the issues in the pages that lost traffic versus the new, currently linked ones.

Uploading a list into Screaming Frog

You can also integrate your site crawl with Google Analytics to identify gaps (Screaming Frog and DeepCrawl have this feature) and verify crawling, indexation, and even structural content-related issues that might have been caused by the migration. The following are some of the fundamentals that I recommend you take a look at, answering these questions:

Verifying against various issues your site may have.

A.) Which pages aren’t in the web crawl (because they’re not linked anymore) but were receiving organic search traffic?

Do these coincide with the pages that have lost traffic, rankings, & conversions? Have these pages been replaced? If so, why haven’t they been 301-redirected towards their new versions? Do it.
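
If the old pages live on an Apache server, for example, each replaced URL can be mapped to its new version with a permanent redirect. A minimal .htaccess sketch (the paths and domain below are placeholders):

    # .htaccess – 301-redirect a replaced page to its new version
    Redirect 301 /old-category/old-page/ https://www.example.com/new-category/new-page/

The equivalent exists for other servers (a return 301 inside an nginx location block, for instance); what matters is that each old URL gets a permanent, page-to-page redirect.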

B.) Is the protocol inconsistent in the crawls?

Especially if the migration was from one version to the other (like HTTP to HTTPS), verify whether there are pages still being crawled with their HTTP version because links or XML sitemaps were not updated… then make sure to fix it.
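
For an HTTP-to-HTTPS migration on Apache, for instance, a site-wide rule should catch any remaining HTTP requests and send them over to HTTPS. A common mod_rewrite sketch:

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Updating the links and XML sitemaps is still the real fix; the rule just ensures crawlers following stale HTTP references end up on the right protocol.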

C.) Are canonicalized pages pointing towards non-relevant URLs?

Check whether the canonical tags of the migrated pages are still pointing to the old URLs, or if the canonical tags were changed and are suddenly pointing to non-relevant URLs (such as the home page, as in the example below). Make sure to update them to point to their relevant, original URL if this is the case.

A page's source code with canonicalization errors.

D.) Are the pages with traffic loss now blocked via robots.txt, or are they non-indexable?

If so, why? Unblock all pages that should be crawled, indexed, and ranking as well as they were before.
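
A leftover staging rule is a common culprit here. For example, a robots.txt like the sketch below (the path is a placeholder) would keep the whole migrated section out of the crawl, while a stray <meta name="robots" content="noindex"> tag would keep individual pages out of the index:

    User-agent: *
    # Leftover rule blocking the migrated section – remove it:
    Disallow: /products/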

E.) Verify whether the redirect logic is correct.

Just because the pages were redirected doesn’t mean that those redirects were correct. Identify these types of issues by asking the following questions:

  • Are the redirects going to relevant new page-versions of the old ones?
    Verify if the redirects are going to the correct page destination that features similar content and has the same purpose as the one redirected. If they’re not, make sure to update the redirects.
  • Are there any 302 redirects that should become 301s (since the moves are permanent, not temporary)?
    Update them.
  • Are there any redirect loops that might be interfering with search crawlers reaching the final page destination?
    Update those as well.

    Especially if you have an independent mobile site version (under an “m” subdomain, for example), you’ll want to verify its redirect logic specifically against the desktop version’s.

Checking redirect logic.

  • Are there redirects going towards non-indexable, canonicalized, redirected, or error pages?
    Prioritize fixing them.

    To facilitate this analysis, you can use OnPage.org’s “Redirects by Status Code” report.

OnPage.org's Redirects by Status Code report

  • Why are these redirected pages still being crawled?
    Update the links and XML sitemaps still pointing to the pages that now redirect to others, so that crawlers go directly to the final page to crawl, index, and rank.

  • Are there duplicate content issues among the lost-traffic pages?
    The configuration of redirects, canonicals, noindexation, or pagination might have changed, and these pages might now feature content that’s identified as duplicate, which should be fixed.

Duplicate content issues shown on OnPage.org.
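
A quick way to spot-check any of the redirect issues above is to trace a URL’s redirect chain from the command line. A sketch using curl (the URL is a placeholder):

    # Print the status code and Location header of every hop in the chain
    curl -sIL http://www.example.com/old-page/ | grep -iE '^(HTTP|Location)'

A healthy result is a single 301 hop landing on a 200 page; chains of several redirects, 302s, loops, or a final 404 all point to the problems described above.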

5. It’s time to implement fixes for the identified issues.

Once you ask these questions and update the configuration of your lost traffic pages as mentioned above, it’s important to:

  1. Update all of your internal links to go to the final URL destinations directly.
  2. Update all of your XML sitemaps to eliminate the old URLs, leaving only the new ones, and resubmit them to Google Search Console (see the sitemap sketch after this list).
  3. Verify whether there are any external links still going to non-existent pages that should now redirect. This way, in the future and with more time, you can perform outreach to the most authoritative sites linking to them so they can be updated.
  4. Submit your lost traffic pages to be recrawled with the Google Search Console “Fetch as Google” section.
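
As a reference for item 2, the resubmitted sitemap should list only the final, 200-status URLs – no redirecting or removed pages. A minimal sketch with a placeholder URL:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/new-category/new-page/</loc>
      </url>
    </urlset>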

After resubmitting, start monitoring the search crawlers’ behavior through your web logs (you can use the Screaming Frog Log Analyzer), as well as your pages’ indexation, rankings, & traffic trends. You should start seeing a positive move after a few days:

Regaining numbers after implementing the fixes.

Remember that if the migration required drastic changes (if you’ve migrated over to another domain, for example), it’s natural to see a short-term rankings and traffic loss. This can be true even if the migration is now correctly implemented and the new domain has a higher authority. You should take this into consideration; however, if the change has improved the former optimization status, the mid- to long-term results should be positive.

In the short term results dip, but as time goes on they rise again.

As you can see above, you can recover from this type of situation if you make sure to prioritize and fix the issues with negative effects before moving on to change anything else that’s not directly related. Once you’ve done this and see a positive trend, you can then begin a full SEO audit and process to improve what you’ve migrated, maximizing the optimization and results of the new web structure.

I hope this helps you have a quicker, easier web migration disaster recovery!

via Moz Blog Read More…

More Room to Tweet – Twitter to Stop Counting Photos and Links in 140-Character Limit

Back in January, there were various reports that Twitter was working on a plan to expand the tweet length limit, taking it from 140 characters to something more like the (relatively) new DM limit of 10k. And while those initial experiments haven’t yet come to fruition, a report on Bloomberg today suggests that tweets are about to get a length boost – of sorts, anyway.

According to Bloomberg, Twitter will soon stop counting photos and links in the 140-character limit for tweets. That means you’ll have an extra 23 to 24 characters to work with (for links and photos, respectively), while still being able to add links and images to enhance your message.

And that’s good, I guess.

I mean, it’s something.

As noted, a lifting of the tweet length limit has been in discussion for some time, with CEO Jack Dorsey himself adding fuel to the fire with this tweet back in January.

And while many believe that expanding the tweet character limit would erode the core differentiator of the service, this new update is another step in that direction. A safe step, in that it’s changing very little to the Twitter experience, but a step nonetheless.

Worth noting, too, that even in their initial discussions of an expansion to the tweet limit, Twitter made it clear that your timeline was not suddenly going to get clogged up with massive tweets that took up an entire window – the extended tweets, according to reports, would still be shown in the timeline as normal, but they’d have a ‘Show More’ type option at the end of the 140-character limit, which would enable users to expand the full message.

Today’s announcement comes as Twitter looks for something – anything – to help them boost engagement and get their growth rates back on track. As part of their most recent earnings announcement, Twitter showed that their monthly active user rate was virtually static, with no growth at all in their core US market.

The platform had hoped the addition of new features like Moments and the introduction of an algorithm-defined timeline could help stimulate growth, but as yet neither innovation has added anything significant to the bottom-line results. The switch to longer tweets – even in a minor capacity – is another change to try and appease users and generate more activity on the platform. And given that a growing number of users are sending screenshots of text anyway, providing more ways for users to expand their tweets, if they so desire, could be a good option.

At least with this change, they’re adding to the tweet length without upsetting users by changing the functionality beyond its established limitations – a minor, but safe, compromise.

Although, even then, you already know how users will respond.

“Instead of extending tweets, why not give us an edit tweet option instead?”

This is the most commonly requested feature, and it’s a constant headache for Twitter – something the three founders recently discussed as part of the platform’s 10th birthday celebrations. The reason they won’t add this functionality is that it would be too easy to retroactively alter the context of a tweet, given its brevity. But a minor change like removing links from the total character count only seems to poke the frustrated users who’re calling for the edit function, so expect those comments to increase in response.

Also, it’ll be interesting to see, if links are no longer included in the character count, how many links you can add. Images are capped at four by the system, but you can add in as many links as you like. If they aren’t included in the count, Twitter will need to have a system which detects and restricts their use – otherwise it won’t take long for spammers to work out how to hijack your attention with massive, link-riddled tweets.  

The change is set to be rolled out within the next two weeks, according to Bloomberg‘s report.

via Social Media Today RSS Read More…

Google Ads Now Being Shown in Image Search by @SouthernSEJ

Business Insider reports that, for the first time ever, advertisements are on their way to Google’s image search results. The rollout is starting slow, beginning with shopping ads which will appear in appropriate image searches. Ads in Google’s image search will take up an entire line of screen space which would otherwise have been occupied by images. A Google representative told Business Insider that this ad unit is being introduced because people already use image search as a starting point for shopping. Apparently, the most common question people have when browsing image search is how […]

The post Google Ads Now Being Shown in Image Search by @SouthernSEJ appeared first on Search Engine Journal.

via Search Engine Journal Read More…

Google Extends Lengths of Title Tags in Mobile Search Results by @SouthernSEJ

Coming hot on the heels of last week’s news that Google has increased the widths of title tags and descriptions is a new report by Jennifer Slegg, who says Google has now increased the title tags in mobile search results as well. Not only have title tags been expanded on mobile, they’ve been expanded even more than they have on desktop. On mobile, site owners now have around 78 characters to work with in their title tags. It’s actually eight characters more than the length of the new desktop title tags, which come in at around 70-71 characters. With seemingly […]

The post Google Extends Lengths of Title Tags in Mobile Search Results by @SouthernSEJ appeared first on Search Engine Journal.

via Search Engine Journal Read More…

Twitter Reportedly to Remove URLs and Images from Character Count by @sayscaitlin

The days of forcing abbreviations so you can fit your text and a photo under Twitter’s 140-character limit will soon be history. According to a report from Bloomberg today, in as few as two weeks, Twitter will no longer count photos and URLs as characters in tweets.

Twitter Reportedly to Remove URLs and Images from Character Count | SEJ

Attaching photos and inserting URLs into tweets currently use as many as 24 characters each—a precious 17% of Twitter’s total maximum character limit—understandably creating hurdles for those who hope to include multiple types of media in a single tweet.

Twitter CEO Jack Dorsey’s interest in new ways to adapt Twitter’s character count was first seen earlier this year when Twitter added the option to add image captions that did not count against the character limit. Rumors that Twitter would be increasing their character limit from 140 to 10,000 characters also surfaced earlier this year. Dorsey aptly responded by tweeting a large screenshot of text. He announced that after observing many users using screenshots to share longer text, Twitter would be rethinking current rules and looking for alternatives to add more characters to tweets.

Considering tweets with images increase engagement by 313%, trying to juggle character count with images (and often a URL on top of that) has become almost a necessity.

Today’s announcement seems to reflect that when Dorsey wrote, “We’re not going to be shy about building more utility and power into Twitter for people. As long as it’s consistent with what people want to do, we’re going to explore it,” he meant it.

Look for this update in the coming weeks.

 

Image Credits

Featured Image: Shutterstock
Screenshot by Caitlin Rulien. Taken May 2016.

via Search Engine Journal Read More…

WhiP ViP

Since July 2014 (almost two years!) we’ve been sending out an almost-daily email newsletter summarizing all of the news and views that are fit to print in WordPress – The WhiP.

And it’s been fun. We’ve gained a loyal readership and – I hope – provided you with some entertaining, informative and useful reading over that time. Our last edition was #436 – that’s a lot of WordPress news!

But, as you can probably imagine, the punning, WordPress news collection, and general hard yakka doesn’t come cheap, and our editors (even the terrible guest editor who had to be fired!) spend anywhere from two to three hours per email… which adds up to quite a lot of time.

So what to do?

So we had a bit of a meeting and thought about ads (boring, rubbish, nobody clicks on them), dedicated promotional emails about WPMU DEV stuff (you’d get absolutely sick of us), and just canning it (noooooooooo!).

And finally came up with the following first principles:

  • We care about our members,
  • We want to provide value to our members, and
  • Our members should get all the good shit,

And came up with an alternative plan!

Specifically, we’re making The WhiP a more-or-less WPMU DEV members-only product.

So, from this week onwards, we’re going to keep up the pun-laced WordPress amusement for WPMU DEV members only.

We will, however, be sending out one free email a week, in which we *may* advertise copiously that as a WPMU DEV member you would get all the WhiPs you could possibly want.

But hey, you could also come and join the party – it’s free for 14 days to try out the goodies – and if you don’t like them you even get to keep all the plugins and themes [and WhiPs! – Ed] forever as it’s all 100% GPL.

I hope you can understand why we’re doing this – we can’t all have our own privately funded, loss-making, news-controlling machine just ’cause we fancy it, and at least this way you can still get your weekly fix early on. And if you’re a member, an ongoing one.

But do feel free to abuse me in the comments too, of course 🙂


via WPMU DEV Blog Read More…

Deploying Google Tag Manager on Multiple Website Environments

Google Tag Manager drastically reduces the difficulty of adding tags to websites with its straightforward interface and built-in debug mode. The ability to test out tags on a live site before publishing is a priceless tool for streamlined deployment of advertising pixels, analytics tracking code and more.

With the latest release of the Environments feature in GTM, we now also have the ability to publish versions of our GTM containers only to specific environments – giving us even greater control over how tags are deployed and allowing additional opportunity to test for quality control prior to publishing updates to live websites.

Control over tag deployment is crucial for many reasons, such as:

  • Ensuring that advertiser pixels only fire on the production site for real visitor traffic
  • QA of robust analytics and custom scripts prior to launch
  • Routing analytics data to the appropriate GA property depending on which environment the hits are sent from
  • Transitioning from hard-coded tags to GTM cleanly, without firing duplicate tags before on-page code is removed
  • And so on…

Thankfully, GTM offers a variety of methods that can be used to deploy tags to different environments:

  1. Blocking triggers
  2. Lookup tables
  3. Separate containers
  4. GTM Environments

Many of these methods can be used together or independently for optimal tag deployment. In this post, we’ll review each of these methods and provide some basic examples of how to take advantage of them.

1. Blocking Triggers

Blocking triggers can be used as exceptions in GTM to prevent tags from firing if they meet a certain set of criteria. Where triggers tell tags when to fire, blocking triggers tell tags when not to fire – and they overrule any existing triggers applied to tags. We use blocking triggers to separate the functionality of the tag and trigger from the environment; so the tag and trigger remain exactly the same for our staging site and our production site, for example, and blocking triggers are added and removed as needed to control tag deployment. This means you do not need to create a separate set of tags and triggers per environment.

Block by Hostname

For example, if you have GTM on your staging environment but do not want your advertising tags firing there, you can set up blocking triggers that will prevent your ad tags from firing based on the URL being that of your staging site.

I won’t go into detail about this example because my colleague Jon Meck wrote a great article about this topic already – take a look at his blog post for more information on how you can use blocking triggers in GTM.

Block by Version

Another way to use blocking triggers is to base them on a version number that you assign to your website when you publish updates. If you set a data layer variable across the website and increment the value each time you start editing and testing new changes, you can set up GTM to look for that version number in order to determine whether or not certain tags should fire.

For example, if the live site is set to version 4 and you know your version 5 changes are only being tested on the dev environment, you can create a blocking trigger to use as an exception for all new tags you want to test that says ‘do not fire this tag on any version less than 5.’ This essentially means the tags will not fire unless you’re on the dev environment (for testing purposes) or version 5 has been published to the live site. They will not fire on the live site until the production ‘version’ variable has been updated to ‘5’.

The code on your site would look something like this – a minimal sketch, assuming the value is pushed into the data layer as siteVersion (matching the GTM variable below) and placed above the GTM container snippet:
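
    <script>
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({
        'siteVersion': 4  // increment with each set of site updates
      });
    </script>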



The data layer variable in GTM would look like this:

Screenshot: the siteVersion Data Layer Variable in GTM.

The blocking trigger would look like this:

Screenshot: a blocking trigger for versions less than 5.

This will also allow your tags to be ready and fully functional the moment the live site’s version variable is set to 5 (when your changes are published) – so you won’t have to remember to go back into GTM to publish anything after the site updates are launched. This saves you the trouble of waking up at 4am for a site launch just to update GTM (believe me, it’s not fun!).

2. Lookup Tables

While blocking rules help us determine what environment we’re in, lookup tables in GTM are particularly useful for helping us direct data to the right place based on the environment.

We typically use these for directing Google Analytics data to the appropriate Google Analytics property based on certain criteria. There are a variety of ways we could do this – a couple examples are below.

By Hostname

We can specify that data collected by Google Analytics tags should be sent to a reporting property if the hostname matches the live site, for example, and that the data should be sent to a test property if on the staging site. To do this, we would create a lookup table like this:

Screenshot: a lookup table variable mapping hostnames to Google Analytics tracking IDs.

And then we would use that lookup table variable in our Google Analytics tags as the Tracking ID field:

Screenshot: a Google Analytics pageview tag using the lookup table variable as the Tracking ID.

If you have a variety of subdomains that are being tracked with separate GA properties, you can set up the lookup table to account for all of those as well. For example:

Screenshot: a lookup table covering multiple subdomains.
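
If you’d rather manage this mapping in code than in the lookup table UI, a Custom JavaScript Variable can do the same job. A sketch, where the hostnames and UA IDs are placeholders:

    // Custom JavaScript Variable: return the GA property for the current hostname
    function() {
      var properties = {
        'www.example.com': 'UA-12345678-1',     // production property
        'staging.example.com': 'UA-12345678-2', // test property
        'blog.example.com': 'UA-12345678-3'     // subdomain property
      };
      // Fall back to the test property for any unrecognized hostname
      return properties[{{Page Hostname}}] || 'UA-12345678-2';
    }

Like the lookup table, this variable would then be assigned to the Tracking ID field of your Google Analytics tags.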

Debug Mode

We can also use lookup tables to account for testing in debug mode within GTM by enabling the debug mode variable and using it in a lookup table variable, which you would assign to your Google Analytics tags as the Tracking ID:

Screenshot: a lookup table based on the Debug Mode variable.

Combination

You can also use this Debug Mode Lookup table to determine when to use the UA ID Lookup table from above:

Screenshot: a lookup table that cascades from Debug Mode to the UA ID lookup table.

With this variable as your Google Analytics tags’ Tracking ID field, GTM will first check whether you are in debug mode – if so, the data will be sent to your test property in Google Analytics. If you are not in debug mode, GTM will then go to the UA ID lookup table to determine which property to send the data to, based on the hostname.

Data Layer Variable

You could also create a lookup table based on a data layer variable. This is especially helpful when attempting to safely transition from hard-coded analytics tracking to implementation in GTM. In a manner similar to that of the version number recommendation for the blocking trigger above, you would basically determine where to send the Google Analytics data based on the data layer variable status. Take a look at Dan’s post to learn more.

3. Separate Containers

Having a separate container for a dev or staging environment used to be fairly common practice (until Environments were announced anyway – more on that below!). As long as you work hard to manage the separate containers and make sure the configurations are mirrored as closely as possible, this can be a helpful solution to deploying certain tags on certain environments.

Screenshot: separate GTM containers for the live and staging environments.

The challenge with this method is that you need to make sure the configuration in the test container is exactly mirrored by the live container.

You can export and import container configurations in order to ensure everything stays exactly the same, rather than manually making the same updates in the live container as in the staging container, for example. And for the bold, the GTM API can help with this as well.

4. GTM Environments

Environments in GTM have been a long-awaited feature and we’re excited to put them to use! With environments, we can now tell GTM that we are working with additional environments beyond just a live website (or websites), and we can deploy container versions to specific environments. This makes testing much more manageable than working with multiple containers because it ensures that the tags, triggers, and variables you’re testing are exactly the same on each environment you test on.

Most companies utilize separate environments throughout the process of building and updating their websites, such as with staging or pre-production environments, QA and even dev environments, in order to test changes before publishing them to their production site. You can imagine the benefits to testing robust tracking solutions and custom code implemented within GTM in pre-production environments just as you already do with your usual website updates.

Before we jump into creating environments though, it is important to recognize that we likely do not need to test all of our tags in GTM for compatibility with each and every environment we may have. For advertiser tags, for example, the standard preview and debug mode available in GTM will likely suffice. For more complex tags, however, there are many reasons why we may want to more thoroughly test in a staging environment. For these, we would likely want to test in an environment that closely mirrors the production site in order to keep the GTM container manageable (dev environments change frequently and would be very hard to keep up with).

The Basics

I’ll provide a basic rundown to setting this up, but Simo Ahava already provided a very in-depth look at how this feature works so I encourage you to go check out his article.

To get started, navigate to the Environments section from within the GTM Admin:

Screenshot: the Environments section in the GTM Admin.

When you open your Environments section, you’ll immediately see three default environments:

Screenshot: the three default environments.

These correspond to the current live container version, the newest created version and the current draft of your container. These are not editable and exist for the functionality of the GTM container that you are used to. To set up your own environments, such as a staging environment, you need to create a new one. Click the red NEW button and name your new environment:

Screenshot: creating and naming a new environment.

When you create a new environment, you’ll get another message that tells you how to start using your new environment:

Screenshot: the message shown after creating a new environment.

This message explains the two primary means of testing tags in your environment: by either sharing a preview link or updating your GTM container script for that environment. I’ll walk through these in a minute. First, we have to decide which container version should be initially published to this environment so we can have something to work with.

Click the Publish To Staging button from this popup window to select a container version. You will probably select the latest version you have available so that your new environment mirrors your live environment to get started.

Screenshot: selecting a container version to publish to the Staging environment.

Now that we’ve done that, let’s look at how we use this environment. You’ll see now that the Environments overview screen shows your new environment and you have a variety of actions you can take with that environment:

Screenshot: the actions available for the new environment.

As I mentioned earlier, there are two ways to use your new environment:

Share Preview

This option allows you to share the environment’s container configuration with a link. When you do this, you should select the Turn on debugging when previewing option before sharing the link with your colleagues so that they are able to use the built-in GTM debug console to evaluate your updates:

Screenshot: the Share Preview option with debugging enabled.

When someone opens this link they’ll see the message that they are now in preview and debug mode for your Staging environment for your GTM container:

Screenshot: the preview and debug mode message for the Staging environment.

They can click the link in the message to check out your website in preview and debug mode to review your changes.

Get Snippet

The second way to use your new environment is to publish directly to it, instead of publishing to your live environment. To do this, you need to enable the ‘publish to’ feature by customizing the actual GTM container script for your new environment. When you select the Get Snippet option from the Actions drop down, a popup will appear with your updated GTM container script:

Screenshot: the modified GTM container snippet for the Staging environment.

You may notice that this is the exact same container ID that you’re using on your live site – but this script has been modified to include an authentication token (check out Simo’s post linked to earlier if you want to learn how that works). You will want to replace the GTM container on your selected environment (probably your staging site) with this new code.
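
For reference, the environment snippet is just the standard container code with extra parameters appended to the gtm.js URL – roughly like the sketch below, where the auth token, environment number, and container ID are placeholders:

    <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
    new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
    j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
    'https://www.googletagmanager.com/gtm.js?id='+i+dl+
    '&gtm_auth=XXXXXXXXXXXX&gtm_preview=env-5&gtm_cookies_win=x';
    f.parentNode.insertBefore(j,f);
    })(window,document,'script','dataLayer','GTM-XXXXXX');</script>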

Once you’ve made this update to your selected environment, you will be able to publish updates in GTM directly to that environment.

So in your container, after you’ve made some updates and you select the big red Publish button to publish your updates, you’ll now have the option to publish your changes live or publish them to your new environment via the Environment drop down:

Screenshot: choosing an environment from the Publish drop down.

Creating new environments also provides you with the opportunity to enable the Environment Name variable, which you can use to set up blocking triggers that prevent tags from firing on the wrong environments, in a manner similar to the examples we talked about earlier.

Side note – If you have shared preview links with your colleagues and you would like to reset a link, you can do so from the environment’s Actions drop down – but note that if you do this you will also invalidate the corresponding container snippet (if you have one installed), and you will need to install a new one on that environment if you want to continue using the publish-to functionality (because it will have a new authentication token associated with it).

Summary

These are some basic ways to manage GTM tags across different environments – you might customize one or more of these methods to best suit your needs.

We generally prefer to use blocking triggers and environments over separate containers for deploying tags across different environments, because they eliminate the need to create separate tags and triggers per environment, which could be challenging to maintain and presents the opportunity to configure them differently by accident. Using separate containers also presents the risk of the dev or staging container being published to the live site by accident, which can cause serious problems.

If separate containers are your preferred option, however, you can import and export container configurations to help make them easier to replicate.





via Blog – LunaMetrics Read More…