Archive for February, 2015

Net Neutrality: The Anguish of Mediocrity

Saturday, February 28th, 2015

It is rare for me to be on the same side of an issue as AT&T and Verizon and on the opposite side of Sprint and T-Mobile, but I think the new Net Neutrality rules that the FCC adopted this week are a mistake that will hurt consumers and the telecom industry.

I won’t take the time to go point-by-point through the various elements of the new rules. Plenty of people smarter than me on regulatory topics have written about that elsewhere. The two aspects that really have me concerned are:

  1. the inability to prioritize paid traffic
  2. the inability to impair or degrade traffic based on content, applications, etc.

I believe that these restrictions will lead to networks that will perform much more poorly than they need to.

The Importance of Prioritization

Thirteen years ago, while I was chief strategist for TeleChoice, I wrote a whitepaper using some tools that we had developed to evaluate the cost to build a network to handle the traffic that would be generated by increasingly fast broadband access networks.

In the paper, I wrote: “ATM, Frame Relay, and now MPLS have enabled carriers to have their customers prioritize traffic, which in turn gives the carriers more options in sizing their networks, however, customers have failed to seriously confront properly categorizing their traffic. There has been no need to because there was no penalty for just saying ‘It’s all important.’”

With the new rules, the FCC ensures that this will continue to be the case.

Think about it. If you live in a city that suffers from heavy highway traffic and you’re sitting in slow traffic watching a few cars zip along in the HOV lane, don’t you wish you were allowed into that lane? Of course you do. Hopefully it even gets you to consider making the change necessary to use that lane. Why do HOV lanes exist at all? Because it was deemed a positive outcome for everyone if more people would carpool to reduce the overall traffic. Reducing overall traffic would have many benefits, including reducing the money that must be spent to make the highway big enough to handle the load while at the same time improving the highway experience for all travelers.

Continuing the analogy, if you’re sitting in slow traffic and you see an ambulance with its lights flashing driving up the shoulder to get a patient to the hospital, do you consider it an unfair use of highway resources that you aren’t allowed to use yourself? Hopefully not. You recognize that this is a particular use case that requires different handling.

Finally, extending the analogy one more time, as you’re sitting in that traffic (on a free highway) and you look over and see traffic zipping along on the expensive toll road that parallels the free highway, do you consider whether you can afford to switch to the toll road? I bet you at least think about it.

Analogies always break down at some point, so let me transition into explaining the problem that the new rules impose on all of us. Networks, like highways, have to be built with enough capacity to provide an acceptable level of service during peak traffic. Data access networks, unlike highways, have traffic levels that are very dynamic, with sudden spikes and troughs that last seconds or less. Like highways, all telecommunications networks have predictable busy-hour patterns; unlike highways, though, the network user experience can be dramatically degraded by a sudden influx of traffic. This requires network operators to build enough capacity to handle the peak seconds and peak minutes reasonably well, rather than just the peak hour.

Different network applications respond differently to network congestion. An e-mail that arrives in 30 seconds instead of 20 seconds will rarely (if ever) be noticed. A web page that loads in 5 seconds instead of 4 seconds will be easily forgiven. Video streaming of recorded content can be buffered to handle reasonable variations in network performance. But if a voice or video packet during a live conversation is delayed a few seconds, it can dramatically impact the user experience.
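
To make this concrete, here’s a minimal sketch – my own toy example, not something from the whitepaper, with an invented traffic mix, tick-based timing, and link rate – of how letting delay-tolerant traffic wait briefly protects latency-sensitive traffic on a congested link:

    from collections import deque

    # Illustrative traffic mix: a burst of packets arriving faster than the
    # congested link can drain them. All values are invented for illustration.
    arrivals = []  # (arrival_tick, kind)
    for t in range(20):
        arrivals.append((t, "email"))      # bulk, delay-tolerant traffic every tick
        if t % 4 == 0:
            arrivals.append((t, "voice"))  # latency-sensitive traffic every 4th tick

    LINK_RATE = 1  # packets the link can send per tick

    def simulate(prioritize_voice):
        """Return the average queuing delay (in ticks) per traffic class."""
        queue_hi, queue_lo = deque(), deque()
        delays = {"voice": [], "email": []}
        tick, i = 0, 0
        while i < len(arrivals) or queue_hi or queue_lo:
            # enqueue everything that has arrived by this tick
            while i < len(arrivals) and arrivals[i][0] <= tick:
                _, kind = arrivals[i]
                q = queue_hi if (prioritize_voice and kind == "voice") else queue_lo
                q.append((tick, kind))
                i += 1
            # serve up to LINK_RATE packets, draining the high-priority queue first
            for _ in range(LINK_RATE):
                q = queue_hi if queue_hi else queue_lo
                if q:
                    arrived, kind = q.popleft()
                    delays[kind].append(tick - arrived)
            tick += 1
        return {k: round(sum(v) / len(v), 1) for k, v in delays.items() if v}

    print("No prioritization (FIFO):", simulate(False))
    print("Strict priority for voice:", simulate(True))

With prioritization, voice delay stays near zero while the email packets absorb the queueing; without it, both classes share the same growing delay even though only one of them cares.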

Thirteen years ago, I argued that failing to provide the right incentives for prioritizing traffic to take into account these differences could require 40% more investment in network capacity than if prioritization were enabled. In an industry that spends tens of billions of dollars each year on capacity, that’s a lot of money.

Why The New Rules Hurt Consumers and the Industry

Is the industry going to continue to invest in capacity? Yes. But the amount of revenue they can get from that capacity will place natural limits on how much investment they will make. And, without prioritization, for any given level of network investment, the experience that the user enjoys will be dramatically less acceptable than it could be.

Let’s just quickly look at the two approaches to prioritization I called out above that the new rules block.

Paid prioritization is a business mechanism for ensuring that end applications get the performance needed to deliver the value promised by the end service provider. This is the toll road analogy, but probably a better analogy is a supplier choosing to ship via air, train, truck, or ship. If what I’m promising is fresh seafood, I’d better put it on an airplane. If what I’m promising is inexpensive canned goods with a shelf life of years, I will choose the least expensive shipping method. Paid prioritization enables a service provider (e.g. Netflix or Skype) to offer a level of service that customers value and are willing to pay for but that requires better-than-mediocre network performance, and it enables that provider to pay for the better network performance needed to ensure that their customers get what they expect. The service provider builds their business model by balancing the revenue from their customers with the cost of offering the service. This approach also provides additional revenue to the network operators, enabling them to invest in more capacity that benefits all customers.

Impairing or degrading traffic based on content or application is a technical mechanism that enables the network to handle traffic differently based on the performance requirements of the content or application. An e-mail can be delayed a few seconds so that a voice or video call can be handled without delay. This allows the capacity in the network to provide an optimized experience for all users.
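
As a rough sketch of what classifying traffic by application might look like, here is an illustrative mapping; the class names, priorities, and delay budgets are assumptions for demonstration, not any operator’s actual policy or a real QoS standard:

    # Illustrative mapping of application types to handling classes, ordered
    # from least to most delay-tolerant. Names and values are invented.
    TRAFFIC_CLASSES = {
        "voice_call":      {"priority": 0, "tolerable_delay_ms": 150},
        "video_call":      {"priority": 1, "tolerable_delay_ms": 200},
        "web_browsing":    {"priority": 2, "tolerable_delay_ms": 2_000},
        "streaming_video": {"priority": 3, "tolerable_delay_ms": 5_000},  # buffered
        "email":           {"priority": 4, "tolerable_delay_ms": 30_000},
    }

    def handling_for(app):
        """Look up how the network should treat traffic from this application."""
        return TRAFFIC_CLASSES.get(app, TRAFFIC_CLASSES["web_browsing"])

    print(handling_for("voice_call"))  # handled first
    print(handling_for("email"))       # can wait out a congestion spike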

Obviously, these mechanisms provide opportunities for abuse by the network operators, but to forbid them outright, I believe, is damaging to the industry and to consumers, and a mistake.

The Intelligence Revolution for Churches (Part 2)

Tuesday, February 24th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Over the past several posts I’ve introduced the Intelligence Revolution and put it in the context of the broader Information Age. I’ve provided a working definition (The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content), I’ve identified the “power” and the “danger” of the Intelligence Revolution, and in the last post I started to answer the question of what the Intelligence Revolution will mean for each of our churches. However, last month’s column used a specific example to demonstrate the risks we face if we are too aggressive in collecting and correlating data about our congregants. What are the more positive ways that large churches can consider using big data?

Revisiting the Danger

Last month I started by making the point that most churches are too small to ever have the data or the capabilities to fully participate in the Intelligence Revolution. But to consider how large churches could potentially leverage big data, I referenced an article by Michael D. Gutzler in the Spring 2014 issue of Dialog: A Journal of Theology. In the article, titled “Big Data and the 21st Century Church,” the Lutheran pastor made the claim that “data collection and analysis could be the key to providing a deeper faith life to the people of our congregational communities.” As I introduced the approach that Pastor Gutzler advocates, I’m guessing that many of you became increasingly uncomfortable. His approach would correlate personal information (including derived assumptions about personal income) with giving, attendance, and commitment to spiritual growth, amongst other data points. His goal was to identify the actions that the church could successfully take for specific families to draw them more deeply into the church.

A few weeks ago, I discussed the article with a Christian friend who has been the data scientist for a major retailer, the chief data scientist for a big data consultancy, and is currently the manager of data analysis for a major web-based service. The approach Pastor Gutzler outlined concerned her, I think in large part because of its reliance on personally identifiable information (PII). Increasingly, regulations are being crafted and enacted to protect PII, especially in light of the growing threat of fraud and identity theft. The high profile cases of credit card data theft from retailers, e-mail and password theft from online sites, and the very broad theft of information from Sony should make it clear to all of us that we risk the reputation of our churches (and by extension, Christ Himself) the more that we collect, store, and correlate information about people that can be personally linked back to them and potentially used to their detriment. But I think she was, as many of us were, also concerned by the types of information being collected and the inferences being made from it. Would we be embarrassed if our constituents found out about the information we’re collecting and how we are using it? If so, then our actions likely aren’t bringing glory to God.

Searching for the Power

So is there anything good that the Intelligence Revolution can do for large churches? The answer will depend on the church, but I think there’s some potential.

Whenever I talk to businesses about the Intelligence Revolution, I emphasize that they should start with the mission of their business. Is there any data that, if available, could help them to better serve their customers in accomplishing their mission? Likewise, each of us should start with the mission of our church. I know there are different views on the mission of the church, so I won’t try to lay out a comprehensive definition that all readers can agree to, but I’m guessing we all can agree that the Great Commission is at least an important part of the church’s mission. In their book What is the Mission of the Church?, Kevin DeYoung and Greg Gilbert summarized it simply: “the mission of the church – as seen in the Great Commissions, the early church in Acts, and the life of the apostle Paul – is to win people to Christ and build them up in Christ.” This follows directly from Christ’s own words in Matthew 28:18-20: “All authority has been given to Me in heaven and on earth. Go therefore and make disciples of all the nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit, teaching them to observe all things that I have commanded you; and lo, I am with you always, even to the end of the age.”

If we just start with this as at least part of the mission of the church, what data could help us in our Gospel outreach efforts, and what data would help us to build our people up in Christ? Many churches treat these two dimensions as the outward-facing and inward-facing aspects of their mission, and I’m guessing that the data we could use will correspondingly come from outward and inward sources.

For decades, churches have used external sources of data to learn more about their city and how they can best reach the unchurched and the lost. The Intelligence Revolution is rapidly increasing the sources of data that are available. Demographics, crime data, the addresses of certain types of businesses and facilities – all of these data sources are becoming increasingly available and searchable. George Barna, who has long been a source of information for the church on national and global trends, has even introduced customized reports on 117 cities and 48 states.

However, to help our congregants grow in their knowledge of God and their ability to observe all that Christ commanded, we likely need to look inside – at the data that we have about our own people. What are their abilities? What are their desires? Where do they live and work? In what ways and in what settings do we touch them today? How do we leverage these opportunities and create additional ones to build them up in Christ? If we have a large enough population, we should be able to anonymize the data for our analysis and decision making. On an aggregate basis, what do we know about the people who attend the early worship service and how should that affect our interactions with them there? What do we know about those in our singles ministry and what opportunities can we create for that group to help them mature and grow?
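
As a hedged sketch of what “aggregate and anonymized” could look like in practice (the record fields, values, and ministry names below are entirely hypothetical), the analysis never needs to touch an individual identity:

    from collections import Counter
    from statistics import median

    # Hypothetical, already de-identified records for early-service attendees.
    # Field names and values are invented for illustration.
    early_service = [
        {"age_band": "25-34", "zip": "75201", "ministries": ["worship team"]},
        {"age_band": "35-44", "zip": "75204", "ministries": ["children", "small group"]},
        {"age_band": "65+",   "zip": "75201", "ministries": []},
        {"age_band": "25-34", "zip": "75214", "ministries": ["small group"]},
    ]

    def aggregate_profile(records):
        """Summarize a group without referencing any individual."""
        return {
            "count": len(records),
            "age_bands": Counter(r["age_band"] for r in records),
            "zips": Counter(r["zip"] for r in records),
            "median_ministries": median(len(r["ministries"]) for r in records),
        }

    print(aggregate_profile(early_service))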

Obviously, this isn’t fundamentally different from how we make decisions today, but the potential promised by the Intelligence Revolution is that we will have more data and greater ability to work with it, so that we can be more precise and make decisions with greater confidence, helping our churches be more successful in achieving our mission, all to the glory of God.

The Intelligence Revolution for Churches (Part 1)

Tuesday, February 24th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Over the past few posts I’ve introduced the Intelligence Revolution and put it in the context of the broader Information Age. Three posts ago I provided this working definition: The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content. Over the past two posts I’ve identified the “power” and the “danger” of the Intelligence Revolution. This article will address the question that you’ve probably been pondering over the past several months – what will the Intelligence Revolution mean for my church?

Different Kinds of Churches

To be honest, I doubt that the Intelligence Revolution will ever significantly impact how many (most?) churches go about serving the Lord. According to the 2010 Religious Congregations and Membership Survey, there are nearly 333 thousand Christian congregations serving over 144 million adherents (adherents is the broadest measure of people associated with a congregation – this represents nearly half of the U.S. population). The simple math tells us that there’s an average of 432 adherents per congregation. In reality, most churches are much smaller than that. According to the 2012 National Congregations Study, the median number of people associated in any way with a congregation is 135 and the median number of attendees at the main worship service is 60. The Intelligence Revolution derives value from “big data” analysis, and with groups of people this small, there simply won’t be data that is big in volume, velocity, or variety. Churches this size also tend not to have the resources to do fancy analysis of whatever data might be available.

Bottom line, these churches will keep doing what they’ve always done, serving the Lord and serving their communities in Christ. I attend a small church. We don’t need fancy data analysis tools to understand the people we serve, because we have deep personal relationships within the body. We know each other’s needs, gifts, and lives. We adapt as new needs arise (as new families arrive or changes happen within families), as new gifts and talents emerge, and as we grow closer to each other in growing closer to the Lord. Just as PCs, the Internet, the smartphone, and social media have provided tools that enhance what we do and make it easier to do it, I expect that the Intelligence Revolution will provide some tools that will make it easier to see the geographic distribution of our families, the concentrations of ages that we serve, and the participation we have in different ministries, but that is simply putting a precise point on the facts that we already inherently know because we know our own small population.

Can Big Churches Benefit From Big Data?

Michael D. Gutzler wrote an eye opening article for the Spring 2014 issue of Dialog: A Journal of Theology. In the article, titled “Big Data and the 21st Century Church,” the Lutheran pastor made the claim that “data collection and analysis could be the key to providing a deeper faith life to the people of our congregational communities.” While we’ve talked about the dangers of collecting personal information in previous articles, Pastor Gutzler says “I would suggest for those working in the life of the church there is a higher calling to data analysis: to help the participants in a community of faith come to a greater understanding of God’s forgiveness, grace and love.”

As his starting framework, Pastor Gutzler rests upon the Circles of Commitment model promoted by Saddleback Church and documented in Rick Warren’s The Purpose Driven Church. The goal for church leaders, in Pastor Gutzler’s model, is to move adherents from being in the unchurched community to the crowd of regular attenders to the congregation of members to the committed maturing members and finally into the core of lay ministers. To accomplish this goal, church leadership analyzes data about each family and family member in the congregation, correlating that data with participation in specific events and activities, examining historical trends, and from that, making wise decisions.

For example, does participation in a given event or activity correlate with increased commitment to the church, no change, or actually a moving away from the core? Do the answers differ based on the current circle of commitment of different families participating? Should we do more events/activities like this or scrap them altogether? Should we target them towards specific families rather than broadly offering them to the entire congregation?
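
A minimal sketch of the kind of correlation being described might look like the following; the circle encoding, events, and movement data are all invented, and a real analysis would also need the privacy safeguards discussed earlier:

    from collections import defaultdict

    # Circles of Commitment modeled as ranks (an invented encoding for illustration).
    CIRCLES = {"community": 0, "crowd": 1, "congregation": 2, "committed": 3, "core": 4}

    # Hypothetical records: for each participating family, the event attended and
    # the circle they were in before and a year after. All data here is made up.
    records = [
        {"event": "fall retreat",    "before": "crowd",        "after": "congregation"},
        {"event": "fall retreat",    "before": "congregation", "after": "committed"},
        {"event": "holiday concert", "before": "crowd",        "after": "crowd"},
        {"event": "holiday concert", "before": "community",    "after": "crowd"},
    ]

    def average_movement(records):
        """Average change in circle rank for participants in each event."""
        moves = defaultdict(list)
        for r in records:
            moves[r["event"]].append(CIRCLES[r["after"]] - CIRCLES[r["before"]])
        return {event: sum(d) / len(d) for event, d in moves.items()}

    print(average_movement(records))
    # e.g. {'fall retreat': 1.0, 'holiday concert': 0.5}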

Pastor Gutzler even argues for targeting the sermon message differently for each circle of commitment. He uses the example of a sermon on stewardship: “A better way to approach the subject would be to give one general message about what stewardship is, but have illustrations that speak to each circle. Then, to emphasize the message, a follow-up communication should be sent to each group that falls into each of the demographics to further emphasize the message’s point.”

Pastor Gutzler identifies five classes of data, which most churches are already collecting, as enough to get started with this segmentation, targeting, and analysis-driven decision making:

  • Attendance: at worship, but also at all other church-related events
  • Community Life: tracking the amount of time congregants invest in different church activities
  • Personal Information: Pastor Gutzler makes the point that, with tools like Zillow and salary.com, even simple information like address and occupation can provide significant insights that can be correlated with other sources to indicate the family’s financial commitment to the ministry of the church.
  • Personal Giving: Not just tithes and offerings, but also donations of food, clothing, and responses to other special appeals.
  • Personal Development: Time committed to opportunities to develop and deepen their faith life.

While I respect Pastor Gutzler’s passion for using every tool available to achieve the mission of his church, I fear that he is demonstrating the “grey areas” that I warned about in my last article. Our actions will be scrutinized by the watching world and by our own church members. We are to honor and glorify God, reflecting His attributes in loving and serving those around us. We are not to trust in a mechanical, scientific exercise in data analysis, but we are to trust in the living God who works in mysterious ways, drawing people to Himself.

All that being said, I believe that large churches, especially, do and will have “big data” at their fingertips. Pastor Gutzler’s article may go to an extreme, but by doing so, I think it hints at ways that churches will be able to honorably improve how they serve their congregants while respecting their privacy. We will discuss this more in the next article in this series. I urge you to rely heavily on prayer and the Word of God as you move your churches forward in this coming revolution.

Ten Strategic Issues Facing Mobile Operators

Monday, February 23rd, 2015

In a recent consulting engagement, I was asked about the strategic issues facing U.S. mobile operators. I think I answered reasonably well, but it made me realize that the topic deserved a more thoughtful updating based on recent activities. With that in mind, I’d like to provide a high level outline of what I think are the biggest issues. I think each of these could be a future article in and of itself.

1. Duopoly, The Rule of Three, or the Rule of Four
Perhaps the biggest strategic issue being played out right now is one of industry structure. Each quarter, Verizon and AT&T become stronger. Their strong balance sheets, fueled by rich cash flows, enable them to strengthen their hand. Meanwhile, the other two national operators (Sprint and T-Mobile) fight it out for third place. The Rule of Three claims that any market can only support three large generalists, implying that only one of those two can survive. Boston Consulting Group takes it a step further with their Rule of Four implying that perhaps two is the right number. American regulators would apparently block a combination of Sprint and T-Mobile, believing that a market with four competitors is better for consumers than a market with three competitors. But, in the long run, will that ultimately result in the failure of both #3 and #4, and in the short run, will it cause behaviors that damage the entire industry?

2. Wildcards: Google, Dish, América Móvil
Over the past few years, Google has done an admirable job of shaking up the broadband industry with the introduction of Google Fiber. In markets where the company has announced plans to build out local infrastructure, existing competitors have had to respond with improved offers to customers. Now, Google is rumored to be preparing to offer wireless services. Would they have a similar impact on the wireless competitive space, or are the disruptive moves already being introduced by T-Mobile and Sprint significant enough that Google’s impact would be muted? Meanwhile, Dish Network has been spending tens of billions of dollars accumulating a rich treasure chest full of spectrum which they are obligated to begin building out for wireless services. What will they do and how will that impact the competitive environment? Finally, América Móvil has spent the past few years preparing for a major global strategic shift. They already have a strong foothold in the U.S. prepaid market as an MVNO (TracFone), but their relationship with AT&T has been significantly altered, perhaps positioning them for a more aggressive move into the U.S. Any of these three potential new entrants could have significant impacts on the American mobile market and must factor into the strategic scenarios for the four mobile operators.

3. Licensed versus Unlicensed Spectrum
As we’ll discuss more below, spectrum is the lifeblood of any wireless network. The global mobile industry has been built on licensed spectrum. Licensed spectrum has many advantages over unlicensed spectrum, including the ability to use higher power radios with better signal-to-noise ratios, resulting in greater range, throughput, and performance. The lack of unmanaged contention for the airwaves results in predictable and manageable performance, all of which yields higher reliability for each connection. The industry has invested hundreds of billions of dollars to build out networks that provide a wireless signal for the vast majority of the U.S. However, the cost to build out a wireless network with unlicensed spectrum is a small fraction of the cost to build with licensed spectrum. Companies offering services with unlicensed spectrum are also unburdened by the regulatory requirements placed on Commercial Mobile Radio Service operators. The cable MSOs have been most aggressive in shifting their focus from licensed to unlicensed spectrum. After decades of positioning to participate in the traditional cellular industry (winning spectrum in auctions, investing in Clearwire, partnering with Sprint, etc.), in 2012 Comcast, Time Warner Cable, and others sold their licensed spectrum to Verizon and aggressively started building out a nationwide WiFi footprint using unlicensed spectrum. About a month ago, Cablevision introduced their Freewheel WiFi-based smartphone service to compete with mobile operators. Expect others to follow.

4. Spectrum Portfolio
Although mobile operators are toying with unlicensed spectrum, their strategies remain very centered on licensed spectrum. To effectively meet the growing demand for capacity, all operators will need more spectrum of some kind. However, not all spectrum is equal, and operators know they need a balanced portfolio. There are a variety of criteria that factor into the attractiveness and utility of any given spectrum, but the easiest to understand is simply whether the spectrum is low-band, mid-band, or high-band. Low-band spectrum has a frequency less than 1GHz and provides the best geographic coverage (the signal travels farther) and in-building penetration (the signal passes more easily through walls). However, at these lower frequencies, there tends to be less spectrum available, and it has generally been made available in smaller channels, limiting the capacity (the amount of bandwidth that can be delivered to customers). High-band spectrum generally has a frequency above about 2.1GHz and, while it lacks the coverage of low-band spectrum, there’s generally more of it and it generally comes in larger channels providing lots of capacity. Mid-band spectrum (between 1GHz and 2.1GHz) provides a compromise – reasonable (but not outstanding) capacity with reasonable (but not outstanding) coverage. In the early 1980s, the predecessors of Verizon and AT&T, as the local telephone monopolies covering most of the country, received free 800MHz low-band spectrum in each market they served. In 2008, the FCC auctioned off 700MHz low-band spectrum. Of the national players, only Verizon and AT&T had deep enough pockets to compete, and they walked away with strengthened low-band spectrum positions. Today, these two hold the vast majority of low-band spectrum. T-Mobile and Sprint are hoping that the 2016 600MHz incentive auction will help them begin to balance their portfolios, and they are demanding that the FCC enact rules to avoid another Verizon/AT&T-dominated auction process. All players have reasonable amounts of mid-band spectrum (with AT&T and Verizon again using their strong balance sheets to further strengthen their positions in the recent AWS auctions). The majority of Sprint’s spectrum is high-band 2.5GHz spectrum.
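
For reference, here is the band breakdown described above expressed as a tiny classifier; the thresholds follow this post’s rough definitions, and real band plans are messier:

    def band(freq_mhz):
        """Classify spectrum by the rough thresholds described above."""
        if freq_mhz < 1000:    # below 1 GHz: best coverage and in-building penetration
            return "low-band"
        if freq_mhz <= 2100:   # roughly 1 GHz to 2.1 GHz: the compromise
            return "mid-band"
        return "high-band"     # above ~2.1 GHz: the most capacity, the least coverage

    # Frequencies mentioned in this post (MHz): 600, 700, 800, AWS (~1700/2100), 2500
    for f in [600, 700, 800, 1700, 2100, 2500]:
        print(f, "MHz ->", band(f))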

5. Network Technologies
Mobile operators face a number of strategic decisions over the next few years related to network technologies. There are enough uncertainties around the key decisions that each operator has a slightly different strategy. Two of the biggest decisions relate to small cell deployments and migration to Voice over LTE (VoLTE). AT&T has the most comprehensive strategy, built around their broader Project Velocity IP (VIP), which they hope will free them from much of the regulatory oversight they currently endure in their monopoly wireline footprint and which therefore provides tremendous financial incentive. This is driving a relatively aggressive small cell deployment and a moderately aggressive VoLTE plan. Verizon has been the most aggressive of the national players in deploying VoLTE, while (until recently) being the most hesitant to commit to significant small cell deployments.

6. Cash Management

6a. Capital Expenditures
None of this is cheap. It takes deep pockets to acquire spectrum and even deeper pockets to build it out. In a technology-driven industry, new network architectures will always require significant investments. As price wars constrain revenue, while demand for capacity continues its exponential growth, CapEx as a percent of revenue will likely become a significant strategic issue for all operators.

6b. Expense Management
Operating expenses and overall cash flow also can’t be overlooked. Growing demand for capacity and small cell deployments require increasing backhaul spend (although the shift to fiber for macro sites has helped bring that under control for most operators). But the biggest issue will likely continue to be the cost of providing smartphones and tablets to customers. As an illustration of how significant this cost is for a mobile operator, in Sprint’s 2013 Annual Report, the company reported equipment net subsidies of nearly $6B on service revenues of just over $29B (over 20%).

In 2012, T-Mobile introduced equipment installment plan (EIP) financing as an alternative to subsidies, and early in 2013 it announced that it was eliminating all subsidies. Since then, the other three national operators have similarly introduced device financing. From an income statement perspective, this helps T-Mobile’s earnings since the device is accounted for as an upfront sale, typically near full price. However, T-Mobile and their competitors have introduced zero-down, zero-interest (or close to it) terms, and they are discounting the monthly bill for the customer by roughly the same amount as the monthly equipment financing payment to keep the total monthly cost to the customer competitive with traditional subsidized plans. The net result is that T-Mobile (and their competitors, who have all followed suit) are taking on the financing risk without significantly improving their cash flow.

For 2014, T-Mobile reported just over $22B in service revenues (a 17% increase over 2013). They also reported equipment sales of $6.8B (a 35% increase and 30% of service revenues). But they also reported the cost of equipment sales at $9.6B (an increase of 38%), and they reported that they financed $5.8B in equipment sales (an increase of 75% over 2013 and 26% of service revenues). As of the end of 2014, T-Mobile had $5.1B in EIP receivables (an increase of 78%). That’s a lot of cash tied up in customer handsets. The strategy has worked in terms of attracting customers to switch to T-Mobile (which is why their competitors have had to respond), but it’s less clear that it’s been financially beneficial for the company in the long run. Verizon, for one, seems unconvinced and has been unenthusiastic about device financing. I believe this will continue to be an area of strategic deliberations at all mobile operators.
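
To put the handset-financing numbers in one place, here is the simple arithmetic on the 2014 figures cited above (values rounded as reported here, so the ratios come out approximate):

    # T-Mobile 2014 figures as cited above, in billions of dollars (rounded).
    service_revenue  = 22.0   # "just over $22B", up ~17% over 2013
    equipment_sales  = 6.8    # up ~35%
    equipment_cost   = 9.6    # up ~38%
    financed_sales   = 5.8    # equipment sales financed in 2014, up ~75%
    eip_receivables  = 5.1    # balance at year end 2014, up ~78%

    print(f"Equipment sales / service revenue: {equipment_sales / service_revenue:.1%}")  # ~30%
    print(f"Financed sales  / service revenue: {financed_sales / service_revenue:.1%}")   # ~26%
    print(f"Equipment sold below cost by:      ${equipment_cost - equipment_sales:.1f}B")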

7. Plan Types
This shift away from subsidized devices is also part of a disruption in how the industry views plan types. For decades, the industry focused on postpaid phone plans. These plans were subsidized, but the customer was locked in for two years, “ensuring” that the operator earned back their up-front investment in the device. Because operators, for the most part, managed this business with appropriate discipline, only prime credit customers could get a subsidized device and these tended to be fairly profitable customers. Those that didn’t qualify settled for a prepaid plan where they purchased the phone upfront at or near full price, which provided better cash flow early in the customer life, but less profitability over time. Eliminating subsidies also eliminates the two-year service plan (although the long term device financing still provides customer lock-in), blurring much of the distinction between postpaid and prepaid. The number of people with multiple wireless devices is also increasing as we carry iPads and other tablets, as automakers integrate wireless connectivity into the cars we drive, and as we move towards a day when virtually any product with a power supply will be wirelessly connected to the Internet. Different operators are taking different approaches to how to structure their plans to accommodate these changing customer behaviors within their business models, and I’m sure it will continue to be a topic for internal debate and discussion as the industry models evolve.

8. Commoditization
In many respects, wireless service is increasingly viewed as a commodity by customers. Operators continue to trumpet their network differentiation, but to the consumer there is generally the perception that all operators offer the same devices, in the same ways, and support those devices with networks that work reasonably well just about everywhere we go. Over the past 6 to 12 months, T-Mobile and Sprint have been very aggressive about reducing pricing or offering more for the same price, in a successful effort to take customers away from Verizon and AT&T. Those two larger operators have had to respond with lower prices or increased buckets of data. The operators may be denying it, but it sure looks like a commodity market to me, and I imagine that’s a discussion that’s happening in each operator’s strategic planning meetings.

9. Quad Play or Cord Cutting
For well over a decade, there’s been an ongoing strategic debate within the industry about whether a combined wireless and wireline bundle is critical to market success. At times, some players have decided that it will be and have taken actions, such as the strategic alliances between cable MSOs and wireless operators (Sprint, Clearwire, and Verizon), or advertising campaigns focused on integration across multiple screens (TV, computer, phone). So far, there’s little evidence that it really matters. Consumers take whatever landline voice, broadband, and video services they can get from the duopoly of their cable provider and “telephone” provider, and then choose from a competitive landscape for their mobile needs. For the last few years, it appears that no one in the U.S. industry has seen any need to focus on a quad play future. In fact, the focus has been more on cord cutting and over-the-top players. However, in Europe, there’s a very different story playing out and it is driving massive industry consolidation. Especially while wrestling with the questions about commoditization, operators will once again question the benefits of a differentiating bundle.

10. Re-intermediation
Another common tactic to combat commoditization is to “move up the stack.” In the mobile industry, that would be “move back up the stack.” The introduction of the iPhone, followed by Android devices, led to the disintermediation of the mobile operator from much of the value chain. Prior to the iPhone, operators carefully managed their portfolio of phones, telling OEMs what features to build and it was the operators who largely drove demand for different devices. Operators collected the vast majority of revenues in the industry, directly charging the customer for the phone, the network service, any applications, any content, and any value added services (such as navigation or entertainment). The iPhone (and then Android) enabled better apps and content, provided a better marketplace for buying them, and provided an open connection to the Internet for a wide variety of over-the-top services. Although the operators had poorly managed the apps/content/services opportunity and therefore they didn’t have much “value add” revenue to lose, they clearly lost the opportunity to be more than just the underlying network. Over the past several years, the industry has tried to claw its way back up the stack. Operators pursued “open” strategies, introducing APIs for app developers and other tactics to try to be a “smart pipe” rather than just a “dumb pipe.” They have also tried to encroach on other industries by offering new mobile-enabled services, such as mobile payments and home security/automation. These efforts have not yet had meaningful success, although AT&T’s progress with Digital Life is promising. If operators want to escape the commodity “dumb pipe” trap, at some point they will need to figure out how to reclaim more of the stack.

Obviously, the mobile industry is dynamic and I expect these 10 topics to drive significant strategic decisions across all operators in the coming months and years. If you’d like to discuss any of these topics, drop me a note.

The Danger of the Intelligence Revolution

Wednesday, February 11th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Every new technology introduces new capabilities that enable us to do things that previously weren’t possible or practical. As technologists, our job is to capture this new power for our organization. But every new technology also creates new potentials that represent risk to ourselves, our families, and the organizations that we serve. As technologists, we are also called on to manage this danger. In this post I’d like to discuss the dangers introduced by the Intelligence Revolution.

Grey Areas

A friend of mine recently asked for my advice. He is pursuing a new career path and was facing a decision. Taking one path would position him for systems development opportunities. The other path would position him for big data analytics opportunities. Because I believe that the Intelligence Revolution is happening, because I anticipate a continuing shortage of data scientists who can work with big data, and because his personal background and strengths are well aligned with data analysis, I told him that the big data analytics path could create tremendous value for him personally.

But I warned him that pursuing that path may be a challenge for him as a Christian. I believe that it is a path that will pass through many “grey areas” where his moral standards may be challenged.

What do I mean by grey areas? When we’re dealing with information, it’s easy to think of types of information that we should have no problem using (e.g. the user tells us they want us to use that data for our application to personalize results for them), and it’s easy to think of types of information that we know it would be wrong to use (e.g. secretly capturing the keystrokes when a user enters their credit card number and then using that information to make unauthorized charges to the user’s account).

But in reality, there’s a lot of information that falls in between those extremes. Those of us that run websites rely on log data to optimize our sites. We want to know (on an aggregate basis) which pages get the most views, what pages cause people to leave our site, what external links brought them to our site, and any problem areas that might be causing a bad user experience. Our users want our website to work well, and our privacy policy (hopefully) clearly explains that we’re going to use this information in this manner, so this type of information usage is probably just barely creeping from the “white” into the “grey.” But what if we use log data to zero in on one user and track their page by page journey through our website? In some ways, if our motives are pure, and if our published privacy policy allows it, this is just like the above example, but it’s starting to feel a little creepy, isn’t it? Especially if we take the next step and attach the user’s information (their login id and account information) to this usage pattern, it starts to feel a lot like spying, doesn’t it?
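
Here’s a minimal sketch of the difference, with a hypothetical log format, paths, and user IDs: the aggregate queries are what most site owners run, and pointing the same data at a single user’s ID is exactly where the grey starts:

    from collections import Counter

    # Hypothetical access-log entries: (user_id, path, referrer)
    log = [
        ("u1", "/", "google.com"),
        ("u2", "/pricing", "twitter.com"),
        ("u1", "/pricing", "/"),
        ("u3", "/", "google.com"),
        ("u2", "/signup", "/pricing"),
    ]

    # Aggregate view: which pages are most viewed, and where visitors come from.
    page_views = Counter(path for _, path, _ in log)
    referrers = Counter(ref for _, _, ref in log if not ref.startswith("/"))

    print(page_views.most_common())
    print(referrers.most_common())

    # The grey area: the same data, pivoted onto one person.
    journey_of_u1 = [path for user, path, _ in log if user == "u1"]
    print(journey_of_u1)  # page-by-page journey of a single user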

Well, some companies do exactly what I’ve described, and their customers applaud them for it. When I log onto my Amazon account, I’m presented with recommendations based on what I’ve bought in the past, and even based on items I’ve simply browsed in the past. Sometimes it feels creepy, but most of the time I’m thankful for the recommendations, which help me find products that meet my unique needs.

Other companies have been strongly criticized and their customer loyalty has suffered because of their use of similar customer usage information that they were using to improve the customer experience. For example, in 2011, the mobile phone industry suffered a serious black eye when someone discovered that virtually all smartphones had software that collected information about usage and reported it back to the mobile operators. The operators wanted this information because it provided precise location information and information about how well their network worked in each location. That told the operators where their customers went (and where they needed a network) and how well the network actually worked in those places. This enabled better investment decisions so that the operators could provide a better experience for their customers. Unfortunately, the software company (Carrier IQ) that the operators used was collecting information that didn’t seem necessary for the stated goal, and customers weren’t informed about the information being collected and how it was being used. Carrier IQ also didn’t respond well to the situation, all of which forced the mobile operators to remove the software from all their customers’ phones and made it much harder for the operators to provide a good network experience.

What Does That Mean for Us?

Hopefully those examples spell out the danger for us, both as consumers, and as technologists that are tasked with helping our organizations to leverage technology to best serve our constituents.

As consumers, we have to realize that businesses (and governments and others) have more and more information about us – not just what we do online, but in every transaction that we perform with anyone. How that information will be used will not be limited to the ways that we’ve explicitly requested and not even to the ways that companies have told us they would use the information. In a way, I guess, that may serve as encouragement to be “above reproach” in everything we do and perhaps may be a help in restraining sin. We know that God sees everything we do and even knows our heart, which should be motivation enough, but perhaps knowing that companies and men see our actions as well may cause some to act in a more Godly and honorable way. But it’s also rather scary, knowing that, unlike God, men are sinful and companies don’t always act in our best interests.

As technologists, we must view ourselves as wise stewards of the information that we have. Either explicitly or implicitly, those we serve have entrusted us with it and we must protect it and deal with it in an honorable manner, with right motives and a servant’s heart. But, just as Christ explained in the parable of the talents (Matthew 25), we shouldn’t just bury this treasure, we must maximize the value of it for the benefit of those that have entrusted us with it. We must capture the power of information to the good of those we serve and to the glory of God. Key to this will be right motives, transparency, security, and trust.

Mobile Impact Obvious

Monday, February 2nd, 2015

As my recent posts imply, I’m thinking quite a bit beyond the “mobility revolution.” A fascinating article at Wired makes it clear that the impact of mobile has become obvious, and when something is obvious, it’s much less interesting to me. (That doesn’t mean that execution- and operations-minded folks should ignore mobile – now is the time when the real money is obviously being made…)

Reading this article took me back to early 2012. Facebook’s IPO was the big story and the biggest knock on the company was that it lacked a mobile strategy. Today, more than half its revenue comes from mobile and they are being lauded as one of the few to have figured out mobile. Back then, Facebook wasn’t alone. Perhaps setting the tone for the year to come, in late 2011, the world’s largest technology company at the time, HP, ousted their CEO, at least in part, for a failed mobile strategy (the company doesn’t show up in the Wired piece because they haven’t been able to recover to a leadership spot in tech). Later in 2012, Intel’s CEO was forced to resign because of a failed mobile strategy. (Like HP, Intel rarely gets mentioned these days when folks talk about the companies leading the technology industry.)

2012 was the wakeup call. 2015 is showing which companies jumped and which hit snooze.