How to use Perfect Audience with Google Tag Manager and Turbolinks

We have a web application project that uses Google Tag Manager (GTM) and Ruby on Rails. As a result, we also use Turbolinks. We found that, even with a custom trigger properly configured (read about that in this article), Perfect Audience would only fire the first time we loaded a page, and not for subsequent page views.

Here’s the trick: Don’t use the Perfect Audience tag provided by Google Tag Manager. Instead, create a custom one. Here are the steps:

  1. Go to “Tags” in GTM and click “New”.
  2. Select “Custom HTML Tag” and paste the javascript code that Perfect Audience provides you with (go to “Manage > User Tracking” in your Perfect Audience account to get this).
  3. Under “Fire On”, select the custom trigger you’ve created to fire on virtual page changes.
  4. Save your changes and be sure to Publish!

Once you’ve completed these steps, you can verify that it’s working. Visit your web app in Chrome or any other browser with a developer console. With each page change, you should see network activity to a server with the hostname “”.

How to use Google Tag Manager with Rails and Turbolinks

In a recent project, we wanted to use Google Tag Manager (GTM) along with Rails. The trick, however, is that most Ruby on Rails projects these days use Turbolinks. This post shows how to get up and running quickly.

Note: This setup is generic enough that it should work for any website where full page reloads aren’t the norm. This is generally the case for web apps that utilize JavaScript front-end frameworks, like Ember, Angular, etc. The only adjustment you should need to make is the particular event you bind the callback to. In this example, we use jQuery to bind to the “page:change” event.

The Problem w/ GTM and Turbolinks

Turbolinks effectively changes how the browser follows a link. Instead of reloading the whole DOM with every new page view, Turbolinks replaces only the contents between the body tags. This speeds up page load time, but it also means the head section, and consequently any JavaScript you may have included there, isn’t reloaded.

In the case of GTM, the first time you load a page of your app, the “All Pages” trigger will fire. Subsequent page visits will not fire it, so you won’t get the script loading behavior you may expect.

The Fix

There are already some articles online and on Stack Overflow about how to resolve this issue; however, they were written some time ago and don’t account for the current version of Google Tag Manager. Here are the current steps you should take to get up and running.

Add GTM and Custom Trigger to Your Site

We added the following code right before the end of the head section. The first chunk of code notifies GTM when Turbolinks changes the page. The second chunk is just the vanilla snippet that GTM provides to you. Be sure to replace the latter chunk with the unique snippet that Google Tag Manager provides for your account.

<!-- Google Tag Manager trigger for Turbolinks -->
<script type="text/javascript">
  window.dataLayer = window.dataLayer || [];
  $(document).on('page:change', function() {
    var url = window.location.href;
    dataLayer.push({
      'event': 'pageView',
      'virtualUrl': url
    });
  });
</script>
<!-- End Google Tag Manager trigger for Turbolinks -->

<!-- Google Tag Manager -->
<noscript><iframe src="//www.googletagmanager.com/ns.html?id=[YOUR ID]"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','[YOUR ID]');</script>
<!-- End Google Tag Manager -->

Wiring Your Tags To Fire

Once you’ve completed the above step, you need to create a Trigger object in GTM. You’ll associate this object with tags in GTM, allowing you to determine which tags “fire” when Turbolinks changes the page.

Follow these steps:

  1. In GTM, click “Triggers”. Click “New”.
  2. Select “Custom Event”.
  3. For “Event name”, enter “pageView”.
  4. Click “Create Trigger”.

To associate a tag with your new trigger:

  1. Create a new tag or edit an existing tag.
  2. Under “Fire On”, click “More”.
  3. Select the trigger you just created and click “Save”.
  4. Click “Save Tag”.
  5. Be sure to click “Publish” once you’ve made all of your changes so that the changes go live.

That’s it. You should be good to go!

These Companies Sell Your Email To Spam

Wonder why you’ve seen an uptick in spam in your inbox? It could be that you created an account with any of these Mobile Device Management (MDM) providers. Here’s an offer email we’ve received at least five times in the last 12 months. We’re no doubt targeted as we’re also in the MDM space.

Subject: Mobile Device Management Software Users

Would you be interested in our recently compiled Mobile Device Management
Software Users List?

These are some of the users list we provide:

* AirWatch
* MobileIron
* Good Technology
* Xora
* IBM Worklight
* Citrix XenMobile
* SAP Afaria
* BoxTone
* Fiberlink
* Symantes Athena

Please let me know your thoughts, so that I can send you more information
for your review.

Best Regards,


Senior Demand Generation Executive

Notice the typo in the domain name of the email ‘From:’ field. This is grade-A scummy SPAM selling the opportunity to SPAM further. Needless to say, we will not be participating.

The indication that some of the largest MDM players in the field are selling off their customer emails to SPAM lists is disturbing.

Apple iOS 9.3 Native App Bundle Identifiers

We had a heck of a time tracking down the bundle identifiers for iOS 9.3. Here is what we came up with. Each has been tested and verified as accurate, at least for iOS 9.3. Feel free to add to the list below!

Note: We’re not sure whether case is important or not.

App Store -
Calculator -
Calendar -
Camera -
Clock -
Compass -
Contacts -
FaceTime -
Find Friends -
Find iPhone -
Game Center -
Health -
iBooks -
iTunes Store -
Mail -
Maps -
Messages -
Music -
News -
Notes -
Phone -
Photos -
Podcasts -
Reminders -
Safari -
Settings -
Stocks -
Tips -
Videos -
Voice Memos -
Wallet -
Watch -
Weather -

Install Apps Remotely to iPads and iPhones


If you’re unfamiliar with MDM, VPP, Supervision, and DEP, read below for a quick primer on how these pieces work together to allow for easy remote app installation. If the pressures of the modern-day economy and the strength of your coffee are shouting “ONWARD WITH HASTE!”, here are the CliffsNotes:

  1. Create an MDM account supporting Apple iOS like SimpleMDM.
  2. Put your devices in supervised mode, if possible. Here’s a helpful video.
  3. Enroll your devices with your MDM account.
  4. Make sure your MDM is configured to use device VPP assignment. If you used the above link to create an account, you’re good to go.
  5. Select the apps you want to install and push them to your devices. Done!

It’s possible!

Though the technology has been available for years, many companies that are new to managing mobile devices are unaware of the great solutions available for installing mobile apps to a fleet of iPads and iPhones automatically. I have personally come across countless instances where an internal IT employee was setting up appointments and calling company employees at remote locations to guide them through the process of installing or updating a business app on their device. There’s a better solution, and I’ll outline the best process here.

Enter Mobile Device Management

Mobile Device Management (commonly abbreviated and referred to as “MDM”) is both a product category as well as a protocol.

Apple, Android, Windows, and most modern mobile operating systems implement some degree of MDM. MDM is a communication channel that allows a remote piece of software to configure, protect and monitor the device itself. The degree of this control varies based on OS and configured options, allowing most companies and individuals to strike an agreeable balance between functionality and privacy.

An entire product category has sprung up around this functionality. As interacting with devices via the MDM channel would be next to impossible to do manually, these products simplify the interaction and allow you to leverage the full capabilities of MDM with a few clicks in your web browser. Leading services such as SimpleMDM, Bushel, and AirWatch are good examples of this.

The nomenclature can be confusing at times, with many offerings touting themselves as EMM (enterprise mobility management), MAM (mobile application management), amongst others. These terms do have different meanings, but they are often abused. Point being: avoid taking these terms at face value. Drill into the features and functionality of a product to really understand it.

The subcategory of MDM functionality that is relevant to this article is the ability to manage mobile apps. Most providers will allow you to install free app store apps and paid apps remotely. The more feature-complete ones will allow you to install enterprise iOS apps remotely, too. Using MDM to manage apps is extremely powerful. You can select as many devices as you want and ‘push’ a number of mobile apps to them all at once. Can you imagine having to install a couple apps on 100 devices manually?

Avoid These Pitfalls

The devil can be in the details when it comes to app deployment and I want to outline the most common gotchas and pitfalls.

Apple ID Management

For paid apps, Apple expects a company to utilize their Volume Purchase Program (VPP) to purchase app licenses. Before apps are sent to devices, the MDM is responsible for assigning these licenses so that the end device user doesn’t have to pay for them in advance.

Traditionally, this has been done via Apple ID. The MDM sends a request to the device asking the user to enter their Apple ID info. The license then gets assigned to the user. The problem with this is that it creates a tremendous amount of interaction overhead. Here’s the typical flow:

  1. Device asks user to sign in with Apple ID
  2. Device asks user to join the company VPP program
  3. MDM waits around, checking every so often to see if the user has joined
  4. Finally, MDM assigns the license to the Apple ID of the device user

So, some not-so-great things happen here. First, the device has to have an associated Apple ID. Not a huge deal if the device is assigned to a person, but what if it’s a shared device, or maybe used as a kiosk? Second, entering an Apple ID and password and joining the program is a lot of extra work when trying to deploy to a ton of devices. Last, the wait period between joining a program and receiving the apps can be long. We’ve seen anywhere from a couple of minutes all the way up to a few hours. This can be enough time to provoke a user to call IT for help.


Apple, starting with iOS 9, allows licenses to be assigned to the serial number of a device. If you are evaluating MDMs, make sure to select one that allows you to assign VPP licenses to devices.

Installation Approval Requests

Even with device-level VPP license assignment, there’s another catch. MDM, by default, isn’t allowed to install an app to a device without getting approval from the device user. iOS will prompt the user, asking them if they’d like to allow the app to install. Again, not a big deal generally speaking but if you’re installing to many, many devices, it’s really a horrible experience to have to go through touching each one.


iOS devices can be put into a mode called ‘Supervised’ that allows MDM to have more control of the device than it is normally allowed to have. When a device is supervised, among other things, MDM can install apps without asking for permission. They just show up.

To take advantage of this, make sure that the MDM you’re using has support for supervised devices. SimpleMDM and AirWatch are both capable of this. Also, place the devices in supervised mode before enrolling them with MDM. You can do this one of two ways:

  1. Use an OS X application called Apple Configurator 2. Connect the devices via USB. Part of this process wipes and resets your device, so make sure you do this step first. Here’s a helpful YouTube video.
  2. Use the Apple Device Enrollment Program (DEP). This program allows you to purchase new devices that are already in Supervised mode. You can even have them automatically enroll in an MDM when you power them up for the first time. DEP needs to be supported by your MDM as well to use it.

Apps Not Installing After Pushing

If iOS prompts a device user to install an app and the user says no, iOS sometimes gets stuck and won’t allow the MDM to send the request again. Instead, iOS will respond to the MDM with a message stating that the app is already scheduled for management. Here’s a good example of that.


We’ve found that an easy fix is to turn off the device and turn it back on again. Don’t laugh! You can also unenroll the device from MDM management and re-enroll it, but this is a much more tedious process and we don’t recommend it.

With any luck, Apple will fix this apparent bug that’s been around since iOS 7.


Get started by selecting an iOS MDM provider like SimpleMDM, which allows you to enroll your first five devices for free. Let us know if you run into any hitches in the comments section below and we’ll add the solution to this article.

Happy trails!


How to send Apple Push Notifications Across TONS of Accounts

We needed an APNS package for use with a many-tenant MDM platform. Specifically, we needed the ability to quickly push many notifications to devices spanning across many push certificates.

Traditionally with APNS, you establish a connection to Apple, send as many notifications as you desire, and then disconnect. The problem we have is that we need to create a separate connection for each account, and simply connecting and disconnecting for each notification would look like a DDoS attack from the perspective of Apple.

The particular project in question was written in Ruby, so we started looking at existing gem solutions.


We started here and eventually forked and added MDM support. apn_sender keeps a persistent connection to Apple, which is great. It doesn’t handle multiple certificates though, so that means we’d have to have a separate daemon process running for every single push certificate.

In fact, most APNS packages were eliminated for this reason: They weren’t built with multiple-certificate handling in mind, meaning something costly would have to be instantiated for each certificate.


This gem sets out to solve multiple-certificate handling, but it fell short in two ways:

  1. It requires the compilation and usage of a fork of ZMQMachine. We don’t want to have to manually compile the gem and we don’t want to depend on someone keeping a fork of a project maintained.
  2. It assumes the bottleneck is in the building of the APNS message, not in the SSL connection setup/teardown with Apple. The gem instantiates many workers for the activity of building the message to be pushed, but by default only runs a single firehose, which is responsible for connecting to Apple. We found the opposite to be true: the connection to Apple is the slowest part.

Houston? Grocer?

These are both outstanding, well-used, active projects. Unfortunately, they have the same problem as most gems: they were not designed for many, many different APNS accounts.

Alas, we finally found a solution.

Finally: AppleShove!

In brief, AppleShove receives push requests from a Redis queue structure. These push requests include the APNS certificate and the payload to be pushed. A single thread called the demultiplexer reads from this Redis queue and also manages a pool of connection threads to Apple. When a request is received, the demultiplexer sends the request to the appropriate connection thread. If the connection thread doesn’t already exist, it’s created first. That’s it!

For you concurrency fans out there, we are using the Actor concurrency pattern via Celluloid.

This architecture accomplishes a few things:

  1. “Caches” connections to Apple. If we’ve sent a notification with a particular certificate recently, we get to reuse the connection instead of having to re-establish it.
  2. Allows notifications to be sent in parallel. We aren’t waiting for a series of connections and disconnections to take place before we can send notification #n.
  3. Simplifies our client implementation. Since each notification contains all of the information AppleShove needs to send it on its way, we can request notifications via a single static method.
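The demultiplexer pattern described above can be sketched with plain Ruby threads. This is an illustration only, not AppleShove’s actual implementation (which uses Celluloid actors): all names here are invented, and the APNS delivery is replaced by a stand-in queue so the sketch is self-contained.

```ruby
# Minimal sketch of the demultiplexer pattern: one inbox of push
# requests, routed to a persistent per-certificate worker thread.
class Demultiplexer
  def initialize
    @inbox      = Queue.new # stands in for the Redis queue
    @workers    = {}        # certificate => that connection's queue
    @threads    = []
    @deliveries = Queue.new # record of "sent" notifications
  end

  attr_reader :deliveries

  def push(certificate, payload)
    @inbox << [certificate, payload]
  end

  # Read `count` requests and route each to the connection thread for
  # its certificate, creating the thread on first use.
  def run(count)
    count.times do
      certificate, payload = @inbox.pop
      worker_queue_for(certificate) << payload
    end
    @workers.each_value { |queue| queue << :stop }
    @threads.each(&:join)
  end

  private

  def worker_queue_for(certificate)
    @workers[certificate] ||= begin
      queue = Queue.new
      @threads << Thread.new do
        # One persistent "connection" per certificate lives in this
        # thread, so repeated pushes for the same certificate reuse it.
        while (payload = queue.pop) != :stop
          @deliveries << [certificate, payload] # stand-in for the APNS send
        end
      end
      queue
    end
  end
end
```

Even with requests for different certificates interleaved in the inbox, all notifications for a given certificate flow through one long-lived worker, which is the connection-caching behavior in point 1.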

To see if AppleShove works well for you, too, check out the project here!


The Importance of Business Dashboards

Shortly before or after publicly releasing a product, I inevitably fashion some form of a business dashboard. I’m amazed at how many products are launched without any instrumentation. To me, this results in a scenario similar to “flying blind”, or only having partial, imperfect information about the health of your product.

A business dashboard is a realtime or near-realtime information screen that provides information about key metrics around your product and business. When set up correctly, they provide numerous benefits.

1. A business dashboard provides motivation

After any product launch, it’s easy to lose steam. You’re working on a marketing campaign and it seems to be increasing website traffic, but is it really helping sales? What sort of ROI are you getting on your effort? Should you continue with it or try something different?

It’s hard to feel motivated when you don’t have a good idea of where you stand and where you’re heading. Having an indication that you’re not doing so great can motivate you to work harder. An indicator that an action you took is paying off can also motivate you to keep it up.

2. A business dashboard gives you metrics of your progress

Tied closely to the benefits of motivation, having a measure of the health of your product is absolutely necessary. These will be crucial when measuring the health of your product over time. They can provide small metrics like “how many people are using this new feature we added?” or larger ones, like “what is the average lifetime value of a customer?”.
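As a worked instance of that lifetime-value question, two back-of-the-envelope formulas are enough for a first dashboard metric. This is a deliberate simplification, and the method names and numbers are illustrative, not a full LTV model:

```ruby
# Monthly churn: the fraction of customers you had at the start of the
# month who left during it.
def churn_rate(customers_at_start, customers_lost)
  customers_lost.to_f / customers_at_start
end

# Simple lifetime-value estimate: average monthly revenue per customer
# divided by monthly churn, i.e. revenue over the expected lifetime.
def lifetime_value(monthly_revenue_per_customer, churn)
  monthly_revenue_per_customer / churn
end
```

For example, losing 10 of 200 customers in a month is 5% churn, and at $50/month per customer that implies a lifetime value of about $1,000.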

3. A business dashboard guides decision making and focuses your effort

Building metrics around key aspects of your business, such as customer acquisition, onboarding, retention, and churn will help you better understand why you’re at the place you are today.

Without business metrics, it’s much easier to base a business decision on the latest article you’ve read on Medium. Having metrics around your business, instead, will scream at you when there’s a problem. Retention may be great, but your customers are getting stuck during the onboarding process. Acquisition is lower this month since that article you were mentioned in became less visible.  With each of these indications, obvious, logical reactions can follow. When you invest time addressing them, you know it’s a worthwhile endeavor as opposed to simply “trying something” and seeing if any sort of measurable result occurs.

They’re good for successes, too. Maybe something you changed has resulted in a spike in new customers. That’s good to know! Perhaps the newsletter you’ve been sending out has increased product engagement and decreased churn more than you imagined. Keep it up!

Build a Dashboard Today

If you don’t have one already, spec one out today. If you’re using third party services, like Stripe, run a Google search for “Stripe Metrics”. We’re partial to, but encourage you to check out multiple offerings. Other metrics engines like Google Analytics and Mixpanel are both great but are narrowly scoped. Use them, but try to extract the data and use it to measure bigger metrics and inform the higher-level measures of your business.

Yes, It’s OK to use a Relational Database as a Queue

In software design, it’s not uncommon to come up with the need for a queue. Perhaps you have certain tasks that should be scheduled for background processing. Maybe a type of request needs to be handled in batches.

Why the Flak?

Regardless of the reason, we’ve seen customers and fellow engineers squirm at the suggestion of using a database table to handle the storage of queue items. Here’s an incomplete list of reasons we generally hear:

  1. “We should use a proper queueing technology, like AWS SQS, RabbitMQ, ActiveMQ, Kafka, or IronMQ. MySQL and PostgreSQL aren’t designed for this! This feels wrong!”
  2. “The performance will be horrible! We should at least use something in-memory!”
  3. “If anyone sees a design like this, they’ll think we’re idiots!”
  4. “This won’t scale well when our growth curve starts double hockey sticking!”
  5. “Will it even be transaction-safe? What about handling dead letters? Why even bother?”

Some of these are good points. Performance may matter, eventually. A queueing service may allow for a better eventual architecture. Someone may think you’re an idiot. But let’s look at the bigger picture.

A Case for the RDBMS

Software architecture can be a challenging process. This is because most software is built to address a problem whose best solution will only be fully realized over a long span of time. Your understanding of what the problem is and how it will best be solved will be quite different when you start than when you end, even with adequate discovery efforts, planning, and experience.

You may start out thinking “Hey, I have a bunch of commands that are going to be consumed by a worker process. I should put them in a queue!”. So, you install a queue service on your server and configure a queue. It works great. Look at how fast it is since it’s all in memory! Grabbing an item off of the queue takes 2 ms!

Some time passes and you add more worker processes. Some of these worker processes exist on a server on the west coast and some are on the east coast, to increase uptime in case there’s some sort of locational outage. Great! But, you’re seeing that performance isn’t scaling linearly. Every time you add a worker, it seems that you’re only getting around 60% of the gain you would expect. It turns out that many of the commands in the queue process quicker when they are grouped together and handled by the same worker rather than being interspersed across workers at different locations. Perhaps it has to do with some sort of context/memory switching that the worker has to perform. OK, no problem, we’ll somehow group those items together, right?

Wrong. You’re using a queueing technology that only allows FIFO operations, so there’s no way for you to see further into the queue without popping items off of it. So, bad news.

Another issue arises. Some queue items aren’t processable when you pop them off the queue. You want an hour to pass before retrying them. But wait, you can’t have the worker just sit there and hold onto the item. You also don’t want to set up another queue just for these delayed items. Even worse, you may be using a technology like RabbitMQ that doesn’t really allow for delayed visibility of queue items without some strange dead-letter queue hacking. More bad news!

Yes, there are architectures that can be implemented to get around both of these situations. But… that’s not the point.

What’s the Point, Old Man?

The point is that a relational database, like MySQL for instance, doesn’t have these limitations. It’s not a high-performance queueing technology, but it is a very general, flexible tool that will let you do just about whatever you need, in contrast to a more specific technology like a queueing or messaging service.
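To make that concrete, here is a toy, in-memory model of a table-backed queue. The class, columns, and method names are invented for illustration, and each method’s comment notes the SQL it would correspond to against a real `jobs` table. It demonstrates the two abilities the FIFO service lacked: claiming a group of related items together, and deferring a retry.

```ruby
# Toy model of a `jobs` table used as a queue. A real implementation
# would run the SQL noted in the comments instead of filtering arrays.
class JobsTable
  Job = Struct.new(:id, :group_key, :run_at, :payload, :claimed)

  def initialize
    @rows = []
    @next_id = 1
  end

  # INSERT INTO jobs (group_key, run_at, payload) VALUES (...)
  def enqueue(payload, group_key:, run_at: Time.now)
    job = Job.new(@next_id, group_key, run_at, payload, false)
    @next_id += 1
    @rows << job
    job.id
  end

  # SELECT * FROM jobs WHERE claimed = FALSE AND run_at <= NOW()
  #   AND group_key = ? ORDER BY id
  # Unlike a FIFO queue, we can look anywhere in the backlog and pull
  # related items together for one worker.
  def claim_group(group_key, now: Time.now)
    batch = @rows.select do |j|
      !j.claimed && j.run_at <= now && j.group_key == group_key
    end
    batch.each { |j| j.claimed = true }
    batch.map(&:payload)
  end

  # UPDATE jobs SET claimed = FALSE, run_at = NOW() + ? WHERE id = ?
  # Delayed retry needs no second queue and no dead-letter hacks.
  def retry_later(id, delay_seconds)
    job = @rows.find { |j| j.id == id }
    job.claimed = false
    job.run_at = Time.now + delay_seconds
  end
end
```

Grouping is a WHERE clause, and delayed visibility is an UPDATE of a timestamp column; neither requires changing the queue's architecture.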

You think you know what you want to accomplish, and you may be right, but you may not. I like the saying “build first with wood, then steel”. If you’re unsure, if you’re still discovering the right solution, or if you’re still proving that your software is actually going to get used, work with malleable tools that allow you to make changes and adjustments with ease. If you see, down the road, that you are really being hurt performance-wise, make the switch to a queueing system with confidence.

A Smart Thermostat with Geofencing for Under $100

Where We Started

We replaced our non-programmable thermostat some time ago with a Nest thermostat. Besides its beautiful aesthetics, we were primarily interested in lowering our heating and cooling utility bills.

Shortly after purchasing the Nest, we realized that the Nest way of reducing our HVAC usage didn’t make sense for our house occupancy patterns. We determined that we needed to control the Nest in a different manner to really make it save money.


Adding Geofencing to the Nest Learning Thermostat

I have what I call an anti-schedule. I don’t head to work and return home at the same times each day of the week. I may work at home until 11:00 AM before heading to the office, or may be home rather late if I have an engagement in the evening. Weekends are even less predictable. I may take off to the coast at an early hour to get a day of surfing in, or, I may sleep in, have a friend over for breakfast and coffee, and enjoy time at home.

The Nest Learning Thermostat is designed to learn your patterns without you having to teach it. How can the Nest learn your schedule when you don’t have one? You quickly end up needing to manually program in a schedule to the Nest which sort of works, but is wrong a good deal of the time.

Nest, in the product’s defense, tries to solve this problem with a feature called Auto-Away. It keeps track of activity near it with a motion sensor. If it doesn’t see motion for a span of time, it assumes you’ve left and sets itself to away, overriding the schedule. If it sees motion again, it sets itself back to home. The amount of time it takes to do this is somewhere between 15 and 120 minutes, depending upon learned behavior, according to Nest.

The problem with Auto-Away is that it gets it wrong a good deal of the time. If I’m working from home in my office, Nest will think the house is empty. If I actually do leave the house, the Nest sometimes takes longer to realize I’m gone than I’m actually gone for.

This led to the question: how can I passively keep the Nest updated on my location at all times? Other thermostat manufacturers achieve this with a technology called geofencing. Geofencing is a way of creating a virtual perimeter around a particular space. When you cross this line, some sort of event occurs. The answer for us was to build an app that could keep track of whether we were within a geofence around our home or not, and update Nest accordingly.
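For the curious, the geofence membership test itself boils down to a distance computation. Here is a minimal Ruby sketch using the haversine formula; the function names, coordinates, and the 200-meter radius are arbitrary values for illustration, not what any particular app actually uses.

```ruby
EARTH_RADIUS_M = 6_371_000.0

# Great-circle distance in meters between two lat/lon points,
# via the haversine formula.
def distance_m(lat1, lon1, lat2, lon2)
  to_rad = ->(deg) { deg * Math::PI / 180 }
  dlat = to_rad.(lat2 - lat1)
  dlon = to_rad.(lon2 - lon1)
  a = Math.sin(dlat / 2)**2 +
      Math.cos(to_rad.(lat1)) * Math.cos(to_rad.(lat2)) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a))
end

# The geofence test: is this location within the radius around home?
def inside_geofence?(lat, lon, home_lat, home_lon, radius_m = 200)
  distance_m(lat, lon, home_lat, home_lon) <= radius_m
end
```

Crossing the perimeter is then just the `inside_geofence?` result flipping between successive location updates, which is the event that would drive the thermostat's home/away state.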

Out of this, we created Skylark, a mobile app that keeps track of whether anyone is home or not and updates your Nest when your home empties out and whenever someone comes home.


Unexpected Realizations

The first month of using Skylark in beta was great. We found that our Nest home/away setting changed instantly whenever we arrived home or left. Finally, our Nest status was based on our actual location, not on what the Nest guessed was our location.

We had a problem though. Sometimes our Nest would still set to away while we were home. Other times, it set to home even though we were gone. Auto-away was still wreaking havoc on the Nest status. We realized that we didn’t need it any longer, as Skylark did a better job of determining our status than it ever could. We disabled it.

With Auto-Away and the schedule learning feature disabled, our Nest was turning more and more into a normal thermostat. This got us to thinking: if we hadn’t already owned a Nest, what would we purchase now? Is it worth paying the Nest premium if we aren’t using most of the features?


Commodity Hardware Options?

We considered building our own thermostat. What would it need to have? A basic thermostat with a schedule would suffice. The only extra feature we’d need is WiFi. From there, we could use Skylark to add the magic. But how would we produce at a large enough volume to justify selling a hardware product like this? How would we compete with a company like Honeywell?

A routine visit to Home Depot turned this question on its head. Honeywell already makes a variety of WiFi enabled thermostats. Could we potentially integrate with one of them, adding the much desired geolocation feature to an otherwise unremarkable thermostat? Yes we could.


The sub $100 Smart Thermostat

After some additional engineering effort, we were able to add Honeywell thermostat support to Skylark. Using almost any Honeywell WiFi thermostat, such as this one ($92 at the time of writing this article), we had a geofenced thermostat solution that is every bit as good at saving energy costs as our Nest thermostat is.

Are the Honeywell WiFi thermostats as aesthetically sexy as the Nest thermostat? Not really. Do they save on energy costs just as effectively at less than half the price of the Nest? Absolutely, and we think that’s pretty sexy, too.

Skylark for Nest & Honeywell Smart Thermostats is available in the Apple App Store.
Most Honeywell WiFi enabled thermostats should work with Skylark.

Expanding a Multipath iSCSI LUN in a XenServer 6 Pool without Downtime

You’ve increased the size of a LUN used by your XenServer pool and you’re realizing that the size hasn’t updated in XenCenter, even after rescanning your storage repository. A few more (manual) steps are required to get up and running. The good news is that you won’t need to reboot any of your VMs or complete a rolling migration procedure.

A Brief Intro

There are four layers of storage abstraction that we will be dealing with. They exist in this order:

  1. SCSI 
  2. Multipath
  3. LVM
  4. XenServer

The LUN resize you previously completed could be considered layer 0. This document will proceed to describe the steps that need to be taken at each layer to eventually bubble the changes up to XenServer.


The procedures outlined in this article were tested against the following configuration:

  • 3 server XenServer pool
  • Multipath iSCSI-attached LUN
  • XenServer 6

If you are using a different setup, you may need to adjust the steps at whichever storage layers are affected.


Complete each of these steps on the pool master.

SCSI Layer

Run this command to determine which devices make up the connection to your iSCSI LUN. You should see a device for each iSCSI path:

mpathutil status

If you have more than one LUN that you’re connected to, you can likely differentiate the LUNs by looking at the [size=*G] readout for each.

Once you’ve determined the devices that are involved, run the following command, substituting the devices for your own. Note: you can use brackets to select multiple devices at once.

fdisk -l /dev/sd[bcde] 2>&1 | grep GB

You’ll see that each of the capacities still show the size of the LUN before the increase. Run this command to update them:


If you run the fdisk command again, you’ll see that the new size is now shown. First layer complete!

Multipath Layer

Run the following command again:

mpathutil status

You are looking for the SCSI ID of the LUN we’re working on. It will look something like this: “36001f9300177800002cb000200000000”. Take that value and run the following command to update the Multipath layer:

multipathd -k"resize map [scsi_id]"

You can run “mpathutil status” again to see that the “[size=*G]” has updated.

LVM Layer

First, run this command to get the uuid of the associated LVM physical volume.

xe sr-list

Sub the uuid you retrieve from that command in for this command:

pvs | grep [uuid]

The response of this command will show a device name, likely on the last line of the response. Ours was “/dev/dm-2”. Use this device name in this command:

pvresize [device-name]

Finally, run the “pvs | grep [uuid]” command again to verify that the size has increased.

Update XenServer

Next, we need to notify XenServer. Run this command first to get the uuid of the storage repository, unless you still have it handy:

xe sr-list

Then, run this command, subbing in the uuid of the storage repository:

xe sr-scan uuid=[uuid]

Wrapping It Up

You’ve completed all of the necessary steps for the pool master. Now, repeat these steps on each slave server in the pool. As a final step, you will need to disconnect and reconnect to the pool in XenCenter for the interface to reflect the change in size.