Customer Surveys at the MBTA

While most data sources at the MBTA focus on customer behavior (how many people ride the route 66 bus on Sundays?) or on our performance (how reliable is the Blue Line?), passenger surveys allow the MBTA to learn about who our passengers are and their opinions and priorities. In this post, we explain the types of surveys we conduct and how we design and analyze them.

What is a survey?

We get information from the public in many different ways. A survey is, specifically, a set of questions asked of a sample of the public whose answers can be generalized to the larger population. Because we generalize survey data, we must be careful in how we design survey questions, distribute surveys, and weight the responses to make sure some types of passengers aren’t over- or underrepresented.

For example, we know that Commuter Rail passengers respond to surveys at a much higher rate than users of other modes. If we did not take this into account, it would appear as if half of our riders were Commuter Rail riders, and their opinions would count accordingly. Instead, we weight down over-represented groups and weight up under-represented groups to get the best sense of what our actual customer base thinks.
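As a rough illustration of this idea, a post-stratification weight can be computed as a group's share of actual ridership divided by its share of survey responses. The sketch below uses made-up mode shares and response counts rather than actual MBTA figures.

```python
# A minimal sketch of post-stratification weighting by mode.
# The ridership shares and response counts are illustrative, not actual MBTA data.

ridership_share = {        # each mode's share of actual ridership (assumed)
    "Bus": 0.35,
    "Heavy Rail": 0.40,
    "Light Rail": 0.15,
    "Commuter Rail": 0.10,
}

responses_by_mode = {      # survey responses received (assumed)
    "Bus": 220,
    "Heavy Rail": 310,
    "Light Rail": 90,
    "Commuter Rail": 380,  # over-represented relative to ridership
}

total_responses = sum(responses_by_mode.values())

# Weight = population share / sample share, so over-represented groups
# (Commuter Rail here) get weights below 1 and under-represented groups
# get weights above 1.
weights = {
    mode: ridership_share[mode] / (responses_by_mode[mode] / total_responses)
    for mode in responses_by_mode
}

for mode, weight in weights.items():
    print(f"{mode}: weight = {weight:.2f}")
```

Weighted averages of survey answers then use these weights, so each mode contributes in proportion to its actual ridership rather than its response rate.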

Types of surveys

We use surveys for three main purposes:

  1. to learn about the demographics and travel patterns of our passengers;
  2. to measure satisfaction with our service; and
  3. to gather information to inform specific decisions.

The main survey we conduct is a system-wide passenger survey to gather demographic and other information about our passengers. This effort (called the MBTA Rider Census) is currently underway and requires a very large sample, since we want to be confident about the results down to the bus route level. We use this information to learn about the makeup of our passengers, to help plan service, and to conduct equity analyses for fare and service changes. The results are also used to weight the results of other surveys.
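To give a rough sense of why route-level confidence drives such a large sample, the sketch below applies the standard margin-of-error formula for a proportion. The margin of error, confidence level, and route count are illustrative assumptions, not the Rider Census's actual design parameters.

```python
import math

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Minimum responses needed to estimate a proportion within the given
    margin of error at roughly 95% confidence (z = 1.96), using the
    conservative assumption p = 0.5."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# To estimate a proportion within +/- 5 points for a single bus route,
# that route alone needs roughly this many responses:
per_route = sample_size_for_proportion(0.05)
print(per_route)          # about 385

# Across an illustrative 150 bus routes, the system-wide target grows quickly.
print(per_route * 150)    # about 57,750
```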

MBTA advertisement for the Rider Census

We also conduct a monthly Customer Opinion Panel survey to measure customer satisfaction, the results of which are reported on the MBTA Performance Dashboard. Since the panel survey is entirely online and only includes people who volunteered for long-term participation in the panel, we supplement it with an annual customer satisfaction survey that uses an intercept methodology. The annual survey serves as an important check on the Panel results by gathering satisfaction feedback from riders who ride less frequently or are less likely to take an online survey. In addition, Keolis conducts customer satisfaction surveys twice a year on our commuter rail service.

Finally, surveys are used as part of specific projects or to inform decisions. For example, we conducted a survey as part of the process to revise our Service Delivery Policy so that the standards more accurately reflect customer opinions. This survey asked the public how we should make trade-offs about our service and helped us set thresholds for our service standards.

Survey methodology

We design each survey with several guidelines in mind. The first is identifying the purpose of the survey, which helps determine other essential questions: how long the survey will be, how and to whom it will be distributed, and how the data will be analyzed and used.

Survey purpose

Each survey needs to identify a general purpose as well as the specific research questions or data needs it aims to address. We aim to limit the time investment of our respondents by making sure that, except for demographic questions, each item in the survey questionnaire corresponds directly to one of the identified research questions or data needs.

Sampling plan

A sampling plan is a description of how and to whom the survey will be distributed in order to get a representative sample. Sampling plans and techniques depend on the type and goal of the specific survey. When designing a sampling plan, we consider issues like distribution method, translation needs, and times and locations for distribution. For example, the goal of our annual customer satisfaction survey is to sample both infrequent and frequent riders, so we distribute surveys in geographically dispersed locations and not exclusively at MBTA stations.
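As a rough sketch of what such a plan might look like, surveyor shifts could be allocated across location types in proportion to target shares of completed surveys. The location types, shares, and shift count below are illustrative assumptions, not our actual plan.

```python
# A minimal sketch of allocating intercept-survey shifts across location types
# so the sample is not drawn exclusively at MBTA stations.
# All numbers and categories are illustrative.

target_share = {                      # desired share of completed surveys (assumed)
    "Rapid transit stations": 0.40,
    "Bus stops": 0.30,
    "Commuter Rail stations": 0.15,
    "Off-system locations (libraries, community events)": 0.15,
}

total_shifts = 60                     # surveyor shifts available (assumed)

allocation = {
    location: round(total_shifts * share)
    for location, share in target_share.items()
}

for location, shifts in allocation.items():
    print(f"{location}: {shifts} shifts")
```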

Consistency with other MBTA surveys

In order to ensure that data is comparable over multiple surveys and useful for in-depth analysis, we have standardized a number of frequently asked questions (such as demographic questions and questions about use of MBTA services), question types (such as rating or ranking items on a 7-point scale), and the associated translations into other languages (following our Language Access Plan, we translate all surveys into the seven languages currently most used in our service area). This ensures that data is collected in the same way across surveys where possible.

To facilitate this process, we keep a Question Bank of previously used questions so we can easily access all questions and associated translations for future surveys. Where possible, we reuse the same questions across surveys to ensure comparability and reduce translation costs.
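As a rough illustration, a Question Bank entry might pair a question's standardized English wording with its response scale and stored translations, so a new survey can pull the exact same text. The structure, field names, and example translation below are hypothetical, not our actual schema.

```python
# A minimal, hypothetical sketch of a Question Bank entry (Python 3.10+).
from dataclasses import dataclass, field

@dataclass
class SurveyQuestion:
    question_id: str
    text_en: str
    scale: str | None = None                     # e.g. "7-point satisfaction scale"
    translations: dict[str, str] = field(default_factory=dict)

bank = {
    "SAT_OVERALL": SurveyQuestion(
        question_id="SAT_OVERALL",
        text_en="Overall, how satisfied are you with the MBTA?",
        scale="7-point satisfaction scale",
        translations={
            "es": "En general, ¿qué tan satisfecho/a está con la MBTA?",  # illustrative
        },
    ),
}

# Reusing the standardized wording and translation in a new survey instrument:
question = bank["SAT_OVERALL"]
print(question.text_en)
print(question.translations.get("es"))
```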

Using survey results

One example of how we used survey results to inform our decision-making comes from the Service Delivery Policy. When developing our new Service Delivery Policy, we asked customers in a survey to indicate how inconvenienced they would be (slightly, moderately, or extremely) by various amounts of delay at the first stop of their most frequent trip. The results are displayed in the figure below, showing the percent of people who reported being moderately inconvenienced at each time point and the percent who reported being extremely inconvenienced at that time point.

Chart of inconvenience thresholds as reported by MBTA riders

The results show that there is a big jump in the perception of moderate inconvenience when the first service is 5 minutes late, and virtually everyone is at least moderately inconvenienced by a 10-minute lateness in the first service. In addition, while relatively few people are extremely inconvenienced at 5 minutes, 41% of our respondents reported being extremely inconvenienced at 10 minutes.
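As a rough illustration of how such shares can be tabulated, the sketch below computes the percent of respondents at or above a given inconvenience level for each delay value. The handful of responses shown are made up, not the data behind the chart above.

```python
# A minimal sketch of tabulating inconvenience shares by delay amount.
# The ratings below are illustrative, not actual survey responses.

ratings = [                       # (delay in minutes, reported inconvenience)
    (5, "slight"), (5, "moderate"), (5, "moderate"), (5, "extreme"),
    (10, "moderate"), (10, "extreme"), (10, "extreme"), (10, "extreme"),
]

levels = {"slight": 1, "moderate": 2, "extreme": 3}

def share_at_least(ratings, delay, level):
    """Share of ratings at this delay that are at or above the given level."""
    at_delay = [reported for d, reported in ratings if d == delay]
    if not at_delay:
        return 0.0
    hits = sum(1 for reported in at_delay if levels[reported] >= levels[level])
    return hits / len(at_delay)

for delay in (5, 10):
    print(f"{delay} min: "
          f"{share_at_least(ratings, delay, 'moderate'):.0%} at least moderately inconvenienced, "
          f"{share_at_least(ratings, delay, 'extreme'):.0%} extremely inconvenienced")
```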

When reviewing this feedback, we wanted to make sure that we did not set the service standards at a level already unacceptable to our customers. We used this survey data to set the reliability standard for frequent bus service: a bus counts as late if it arrives more than 3 minutes past the scheduled headway.