Using the group survey mode, we could obtain a sample size of about 250 to 300, which was a sufficient number of observations to estimate a WTP function based on the pair-wise choice questions. We set a target of 275 respondents, which would provide about 2,700 pair-wise choice observations for the WTP model.
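As a quick arithmetic check (our own illustration, not a calculation reported in the study), the target translates into choice observations as follows, given that each respondent completes 10 pairwise choice experiments, as described later in this chapter.

```python
# Back-of-the-envelope arithmetic behind the recruiting target
# (our illustration; the figures are taken from the text).
target_respondents = 275        # recruiting target
choices_per_person = 10         # pairwise choice experiments per respondent

observations = target_respondents * choices_per_person
print(observations)             # 2750 -- roughly the "about 2,700" cited above
```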
Ideally, we would have obtained WTP information from a randomly drawn sample of the U.S. population because it is the population of interest when we consider the benefits of reducing injuries to national monuments. Realistically, our sample frame needed to be restricted to a fairly uniform population because the sample size limited the number of demographic variables we would be able to include in the WTP model, and because the in-person group survey mode limited the number of locations we could include.
We initially considered sampling from the urban population in the northeastern United States because we planned to include questions to obtain valuation data for local marble monuments, which tend to be concentrated in the northeast. However, as noted in Chapter 2, we found in the Boston focus groups that people tended to be less willing to pay for preserving local monuments than national monuments, so we opted to drop that section from the already lengthy survey.
Nevertheless, we retained the idea of sampling from the urban, northeast population because we had conducted focus groups in Boston. A completely random, multistage sampling effort to select multiple survey locations was not feasible. Consequently, we selected two of the largest metropolitan areas, Boston and Philadelphia, for our survey locations because they offered logistical advantages, and citizens in these cities were assumed to have knowledge and attitudes that would be representative of other urban areas in the Northeast.
Because we had conducted focus groups in Boston, we selected it as our first city. Philadelphia was our second city, since it is the fourth largest metropolitan area in the region, and a location with local Hagler Bailly support services. We decided against surveying in cities located near Washington, DC to avoid potential bias should someone recognize that the "current" condition photos had been digitally enhanced.
In each city, we selected two survey locations to improve the representativeness of our sample. Because our schedule limited us to two days in each city, two was the maximum feasible number of locations. Assuming that we could recruit from up to a 10-mile radius around a survey location without significantly reducing attendance due to travel time, two locations provided ample coverage of the metropolitan areas. In the Boston metropolitan area, the sites were Dedham (south of Boston) and Woburn (north of Boston). The Philadelphia locations were downtown and Ft. Washington (northeast of Philadelphia). We selected locations that were far enough apart to maximize sampling coverage of the metropolitan area when we sampled in a 10-mile radius around each site.
We held 4 survey groups at each location for a total of 8 groups per city and 16 overall. We selected this number of groups to keep the average group size below 20; actual group size ranged from 7 to 31. Groups at each location were held at 11:00 a.m., 2:00 p.m., 5:00 p.m., and 8:00 p.m. to avoid potential sampling biases of holding surveys at only one time of day. The 11:00 a.m. and 8:00 p.m. groups were consistently the largest.
Randomly selected households were contacted, and an adult in the household was asked to participate in a survey about a national policy issue being conducted by researchers at the University of Colorado. The adult could be the person answering the phone or, if someone under 18 answered, any adult in the household. He or she was told that the survey would take about two hours and was offered $50 as compensation for the time involved. Regardless of whether they agreed to attend, we tried to obtain general demographic information about them and their households so that participants could be compared to nonparticipants. Appendix B contains the screening survey.
Individuals who agreed to attend a survey group received a follow-up letter on University of Colorado stationery with information on how to reach the survey location. The letter also reminded them of their survey schedule, the survey topic, and the $50 in cash they would receive at the survey. It provided them with a toll-free number to call if they had additional questions or needed to change their survey schedule. Recruited individuals received a final reminder phone call one or two days before the survey.
The original sample size was 3,653. After dropping households having ineligible participants (e.g., people who had recently participated in an in-person survey) or nonworking phone numbers, the adjusted sample was 3,110. Of those, 908 completed our telephone screening survey, and 409, or 45%, agreed to attend a survey group. Of the remaining 2,202 households contacted, 274 refused to complete the screening survey and attend the group survey. The remaining households could not be contacted within six calls (509), could not participate because of language barriers or schedule conflicts (131), or were "potential refusals" (1,207). Potential refusals are people who hang up without specifically refusing to participate; for example, they may say they are too busy to talk on the phone and hang up. The survey center later attempted a second call to potential refusals, who were then classified as either firm refusals or survey recruits, depending on their response. The 1,207 potential refusals are households that had not been contacted a second time before the recruiting target was met.
Table 4-1 summarizes the sampling statistics across the four survey locations. The show rate of 66% is in the range of typical show rates (60% to 75%) for recruited surveys conducted by the Hagler Bailly Survey Center.
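For reference, the recruiting and show rates quoted above follow directly from the disposition counts; a minimal sketch of that arithmetic (variable names are ours, not drawn from the study database) is shown below.

```python
# Sample-disposition arithmetic, using the counts reported in the text.
adjusted_sample = 3110    # after dropping ineligibles and nonworking numbers
screened        = 908     # completed the telephone screening survey
recruits        = 409     # agreed to attend a survey group
attended        = 272     # completed surveys (reported later in this chapter)

recruit_rate = recruits / screened   # ~0.45, the 45% quoted above
show_rate    = attended / recruits   # ~0.665; Table 4-1 reports 66%
print(f"recruit rate {recruit_rate:.1%}, show rate {show_rate:.1%}")
```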
The survey sessions were held in hotel meeting rooms that were large enough to accommodate 30 people who were seated in rows. The "classroom" style seating was chosen to minimize interaction between the participants.
As participants arrived at the survey session, they signed in and received sealed survey response packets. Each survey had an ID number, which was recorded next to a participant's name as she or he signed in. Participants were instructed not to open the packet or view any survey materials until directed to do so by the moderator.
As noted in Chapter 3, the survey began with the moderator's presentation of the background information on monuments, injuries, and preservation options. Throughout the presentation, participants answered questions in their response booklets, and viewed visual materials that illustrated various concepts such as injuries and an injury time line. The presentation ended after the practice choice experiments. Participants completed the 10 pairwise choice experiments, payment card questions, follow-up questions, and socioeconomic questions on their own. As participants finished their surveys, the moderator checked their response booklets for incomplete answers before paying them the cooperation fee.
On average, the sessions lasted about one and a half hours. Respondents generally did not have difficulty following the presentation and response booklet materials, nor did they exhibit difficulties in answering the attitudinal and valuation questions. Occasionally individuals had questions about the material or questions to clarify what they should do in the response booklet. To reduce the possibility of third-party influences, no questions were asked or answered out loud. The moderator answered all questions in private, and answers were based on the information that had already been provided to the group, so no respondent received additional information.
4.4 Study Sample Characteristics and Representativeness
As our first test of our sample's representativeness, we compared its demographic characteristics to the known characteristics of people who were contacted but not recruited (the nonrecruit sample). This comparison indicates whether people who attended a survey session tend to have different characteristics than people who would not attend. About 900 people completed the recruiting survey, of whom 272 attended a survey session. Table 4-2 shows that the survey sample and the nonrecruit sample tended to have similar gender and racial characteristics. However, the nonrecruit sample had higher percentages of people in the 23 to 34 age group and the 65 and older age group, while the survey sample had more people in the 45 to 54 and 55 to 64 age groups. Income distributions are similar, except that the survey sample has a higher share of people in the over-$65,000 category and the nonrecruit sample had a higher share of refusals. Finally, the survey sample tends to have higher educational attainment levels than the nonrecruit sample. This comparison indicates that our WTP estimates may be biased if WTP is a function of age, income, and educational attainment.
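The Table 4-2 comparison is purely descriptive. If one wanted a formal test of whether, for example, the age distributions of attendees and nonrecruits differ, a chi-square test of homogeneity could be applied to the underlying counts. The sketch below is a hypothetical illustration only; the counts shown are placeholders, not the actual Table 4-2 figures.

```python
# Hypothetical sketch: chi-square test of homogeneity comparing the age
# composition of the study sample and the nonrecruit sample.
# The counts below are placeholders, NOT the actual Table 4-2 figures.
from scipy.stats import chi2_contingency

age_groups        = ["18-24", "25-34", "35-44", "45-54", "55-64", "65+"]
sample_counts     = [20, 45, 60, 55, 50, 40]      # study sample (placeholder)
nonrecruit_counts = [60, 130, 120, 90, 80, 150]   # nonrecruits (placeholder)

chi2, p_value, dof, expected = chi2_contingency([sample_counts, nonrecruit_counts])
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# A small p-value would indicate the two groups differ in age composition,
# consistent with the pattern described in the text.
```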
Because some people we contacted refused to answer the demographic question in the recruiting survey, we compared the characteristics of our study sample to census data for the study population (i.e., the population in the 10-mile radius sampling areas). We also compared the sample to the urban northeastern U.S. population1 and the total northeastern population to determine how demographically representative it is of those populations.
For these comparisons, we used the study sample characteristics reported in the in-person survey, and used supplementary data from the telephone recruiting survey when in-person survey data were missing. Table 4-3 reports demographic characteristics for the sample and the three comparison populations: the study population, the urban northeastern population, and the total northeastern population. Sections 4.4.1 and 4.4.2 discuss how comparable the sample is to the study population and the urban and total northeastern populations, respectively.
4.4.1 Comparison of Sample to Study Population
As mentioned above, the study population from which the sample was drawn consists of four recruiting areas with 10-mile radii that were centered on the survey locations in Dedham, Massachusetts; Woburn, Massachusetts; Philadelphia, Pennsylvania; and Fort Washington, Pennsylvania. A comparison of demographic characteristics indicates that the randomly chosen sample is fairly representative of the study population with respect to gender; 56.1% of the sample was female and 52.8% of the study population was female. With respect to race, the sample has about a 10% higher proportion of white respondents than the study population.
The sample has a higher concentration of respondents in the 35 to 64 age range than the study population: about 65% of the sample is in this age range, compared to about 41% of the study population. The sample primarily underrepresents the 18 to 34 age group, which makes up about 24% of the sample but about 40% of the population. The sample also underrepresents the over-65 age group, though by a smaller amount, and includes no one over 85, a group that accounts for about 2% of the study population.
Our sample's tendency to overrepresent the 35 to 64 age group probably explains why household income and educational attainment levels tend to be higher for the sample compared to the study population. Because ages 35 to 64 are generally the high income earning years, the households represented by these individuals probably have higher incomes on average than households represented by a person in either the 18 to 34 age range or the 65+ age range. Similarly, people aged 35 to 64 tend to have higher educational attainments than people aged 18 to 34, who might still be in school, or people who are over 64, who might not have had the same educational opportunities as the 35 to 64 group.
Based on this comparison, we conclude that the WTP estimates for our sample will tend to be biased if WTP depends on race, age, income, and educational attainment.
4.4.2 Comparison of Sample to Urban Northeast Population and Total Northeast Population
Generally, the conclusions about how representative the sample is of either the urban or the total northeastern population are similar to those above, because the study population has demographic characteristics that are fairly similar to those of both northeastern populations. The only notable difference is that both the urban and the total northeast populations tend to have a higher proportion of white individuals than the study population. Consequently, the study sample's racial composition is more comparable to the urban and total northeast populations than to the study population. If WTP depends on race, then the study sample tends to generate less biased estimates for the northeastern populations than for the study population.
4.5 Data Entry and Completeness
The surveys were returned to Hagler Bailly for entry into the response tracking system. The data from the surveys were coded using a code book (Appendix C) and then edited to check for errors or missing responses and to code open-ended responses.
After the coding, the responses in each completed survey booklet were entered into a data file using the SPSS/PC+® Data Entry Program. This program automatically checked for out-of-range values as the data were entered. The program also used double-entry verification: each completed survey booklet was entered twice independently, and the two entries were compared to identify data-entry mistakes. This verification process virtually eliminates data-entry errors.
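The double-entry comparison was handled inside the SPSS/PC+ Data Entry Program, but the underlying logic is straightforward. The following is a minimal sketch of the idea (our own illustration, not the SPSS procedure).

```python
# Minimal sketch of double-entry verification (our illustration; the study
# used the SPSS/PC+ Data Entry Program, not this code).
def verify_double_entry(entry1: dict, entry2: dict) -> list:
    """Compare two independent keyings of the same survey booklet and
    return the variables whose values disagree."""
    mismatches = []
    for variable in entry1:
        if entry1[variable] != entry2.get(variable):
            mismatches.append(variable)
    return mismatches

# Example: a booklet keyed twice; any mismatch is re-checked against the
# paper booklet before the record is accepted. The ID and codes are made up.
first_pass  = {"id": 1042, "q1": 3, "q2": 1, "income": 5}
second_pass = {"id": 1042, "q1": 3, "q2": 2, "income": 5}
print(verify_double_entry(first_pass, second_pass))   # ['q2']
```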
After the data were entered, SPSS for Windows® (Release 6.0) was used to review basic frequencies and cross-tabulations as a final check on the accuracy of the data set. Any inconsistencies or impermissible variable combinations, such as invalid skips or codes discovered during the data cleaning process, were checked against the survey response booklets, and the database was corrected as necessary. The data were then converted into SAS® format for analysis.
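The frequency and cross-tabulation review served to catch codes that slipped past data entry. A comparable automated check might look like the sketch below; the variable names and allowed codes are hypothetical, not taken from the Appendix C code book.

```python
# Illustrative range/skip check (variable names and codes are hypothetical).
import pandas as pd

# Hypothetical map of variables to their allowed codes.
valid_codes = {
    "gender": {1, 2},
    "age_group": set(range(1, 8)),   # codes 1-7
    "q5": {1, 2, 3, 9},              # 9 = "don't know"
}

def flag_invalid(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows containing any code outside the allowed set, so they can
    be checked against the paper response booklets."""
    bad = pd.Series(False, index=df.index)
    for col, allowed in valid_codes.items():
        bad |= ~df[col].isin(allowed)
    return df[bad]

# Example with one out-of-range gender code and one invalid age code.
data = pd.DataFrame({"gender": [1, 2, 3], "age_group": [2, 9, 4], "q5": [1, 2, 9]})
print(flag_invalid(data))
```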
All 272 surveys were completed, meaning no respondents left mid-survey. However, some respondents failed to answer selected questions. We evaluated the completeness of the data by reviewing the number of missing or refused responses for each question; that number rarely exceeded 10 for any given question. As mentioned earlier, missing demographic data were supplemented with data collected during the telephone recruiting survey. These adjustments do not significantly affect the mean responses and were made so that individuals would not have to be dropped from the sample used for modeling.
1. According to the 1990 Census, the northeast includes the following states: Vermont, Maine, New Hampshire, Massachusetts, New York, Rhode Island, New Jersey, Connecticut, and Pennsylvania.