A typology of Housing Search Behaviour in the Owner-Occupier Sector






5.3.2 Piloting the survey


Surveys have been habitually piloted since the 1940s to “determine whether problems exist that need to be addressed prior to putting the production survey in the field” (Rothgeb, 2008, online); normally these problems relate to the internal validity of the questionnaire (Persaud, 2010). Pilots may, for example, be used to test the logic of the survey, to verify that the survey’s measurement methods are appropriate, to test the variability of responses within a defined type of respondent, and to understand whether those surveyed have the time and inclination to complete the survey with appropriate levels of care. The various means of undertaking a pilot differ in how well they address each of these tests. The piloting of a survey should not be confused with a pilot study, which seeks to gather some empirical insight into the phenomena investigated (perhaps as a means to prove an initial concept is worth pursuing in a larger-scale project) (van Teijlingen and Hundley, 2001). Whilst this difference appears clear initially, the boundaries between the two may be less obvious in practice. The main emphasis of piloting the survey here is to test its validity, although as can be seen below (in the section on visual monitoring of survey completion) the responses of pilot interviewees had an impact upon the survey design, and hence could be seen to amend (or improve) the questions used in the research. The survey was piloted in three ways, discussed further below: critical expert review; visual monitoring of survey completion; and a full postal sample pilot.

Critical expert review

This technique normally involves an ‘expert’ reviewing the survey to analyse its logic and verify the measurements used for particular questions. It is routine to use expert reviews alongside other forms of pilot, given that experts are often not representative of the sample population but do have knowledge of the typical failures of surveys (Brancato et al, 2006). Two advantages of this method are its low economic cost and the speed with which responses can be returned to the survey designer. To an extent its success relies on the ability of the ‘expert’ and their understanding of the subject and the audience to be surveyed. Here the review was undertaken informally, through critical comments on drafts by two senior academics within the Department of Town and Regional Planning. As the reviewers’ ability to project how respondents with very different backgrounds and experiences in the housing market would answer the questions is limited, the method was used primarily in the first stages of survey design rather than as a formal pilot, and was used to complement the cognitive interviews (for more on the relationship between expert reviews and cognitive interviews see Dillman and Redline, 2004).

Three significant changes were made to the survey after the critical reviews. First, the order of questions was changed: questions about household characteristics were moved from being the first set of questions to being the last. The rationale was that respondent fatigue occurs later in the survey, and therefore questions that are less taxing for respondents (e.g. age) should come later in the process.

Second, the language used throughout the survey was critiqued through the review. Technical language (such as ‘heuristics’) was replaced with language that should be more accessible (e.g. ‘search process’).

Third, the design of the survey was refined. Section boxes were inserted to visualise the breaks between stages in the search process and between sections of the survey.

Visual monitoring of survey completion (Cognitive Laboratory Interviews)

Once the critical expert review had taken place and the survey had been refined, it was tested on potential participants. Asking respondents to complete the survey in front of a monitor is an effective means of testing how well people understand each question (Fowler and Cosenza, 2009), and is particularly important for respondents who may struggle to answer the questions (Jobe and Mingay, 1990). Observing hesitation, mistakes that are subsequently corrected, and questions skipped until the end of completion all helps the survey designer to improve the language used in the survey and its logical ‘flow’. This technique is often combined with a post-completion interview to enable the monitor to ask why the respondent struggled at certain questions. The method is especially helpful when this interview also asks respondents to reflect on why they completed the questions in the way they did (e.g. to test Likert items).

Two interviews were conducted, each divided into two parts. The first part of the interview was open: the interviewee was asked to describe their approach to searching for and purchasing a property for owner-occupation, and to describe why they searched in that way (for more on the methods of cognitive interviewing see Willis, 2004). This approach allowed participants to describe their experiences in their own words. The second half of the interview focussed on clarification of the survey questions. Interviewees worked through the survey in detail, considering each question in turn and explaining what they thought the question meant and how they would go about answering it26. Simon played a prominent role in the expansion of the use of cognitive interviews, arguing (in a paper with Anders Ericsson, 1980) that the process of verbalising does not necessarily produce an alternative thought process, and that participants’ behaviour in the interview may therefore be comparable to that of participants completing the survey.

The two interviewees were selected not from the sample, but from the researcher’s personal contacts who had recently moved. Using personal contacts, rather than address-based random sampling, allows selection based upon respondent characteristics rather than dwelling characteristics (e.g. data from the household location or data available through the Land Registry). This enabled the selection of two households with distinct characteristics: an individual male of British ethnicity who purchased a three-bedroom terraced house in S6, and a mixed-ethnicity couple with a child who purchased a one-bedroom flat in S11.

From these interviews it was apparent that the questions about search intensity (initially several Likert items) were inadequate, as they were not well understood by the interviewees. These questions were therefore replaced with more straightforward questions about the number of properties viewed. These questions, when combined with the questions on the length of search, provide an insight into the intensity of physical viewings.

Full postal sample pilot

Once the refinements had been made after the cognitive laboratory interviews, the third pilot method was to run the survey in conditions as close as possible to those of the expected final version; it is in essence a full dress rehearsal for the main sample (Presser et al, 2004). This allows respondents as close to the sample population as possible to respond to the survey, and therefore reveals the problems that the sample population will face in completing it appropriately more closely than ‘critical friends’ can. The method provides some evidence of question fatigue that may not appear in settings other than the proposed setting for completion (Persaud, 2010). For example, respondents may complete answers differently (with less care, or more honestly, etc.) when answering the questions at their dwelling (through a postal survey) than when completing the survey in front of a ‘monitor’. One disadvantage of the method is that it offers no opportunity to ask why a respondent completed the survey in a particular manner. The pilot survey is normally based on a much smaller percentage of the study population and therefore may have little or no statistical significance. Whilst this may not appear to be a problem (as the results are not scrutinised or necessarily taken as representative of the population), care should be taken with assuming that the overall sample population will respond to the questions in a similar fashion. In this instance the pilot survey therefore needed to be distributed via postal mail.

Fifty pilot questionnaires were sent out to households selected at random to verify the layout and wording of the questions. The pilot questionnaires were sent out on 20th October 2011, using second-class postage. The envelope included a pre-addressed Freepost return envelope, so that respondents would incur no monetary costs, and to increase the response rate (Yammarino et al., 1991; Bryman, 2008).

The addresses chosen to receive a questionnaire were selected at random from the Land Registry database, covering the sale of all dwellings registered with the Land Registry in Sheffield in 2010. The database was input into SPSS 19 and the random case selection tool was used to derive 50 addresses. The addresses were not weighted, as the purpose of the pilot was not to gather a representative sample, or even to ensure a particular absolute level of responses from a particular group, but simply to generate a set of responses that could identify weaknesses in the questionnaire. It is unclear whether any particular subset of society would provide responses more helpful for assessing the quality of the questionnaire, so no subset was weighted in its favour.
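As an illustrative sketch only, the SPSS random case selection step is equivalent to drawing a simple random sample without replacement. The address list below is a hypothetical stand-in for the actual HMLR extract, not the real data.

```python
import random

# Hypothetical sketch: drawing 50 pilot addresses by simple random sampling
# without replacement, analogous to the SPSS 19 random case selection tool.
# The address list is a placeholder, not the real HMLR data.
def select_pilot_addresses(frame, n=50, seed=None):
    rng = random.Random(seed)
    return rng.sample(frame, n)  # each address can be drawn at most once

frame = [f"address_{i}" for i in range(4843)]  # stand-in sampling frame
pilot = select_pilot_addresses(frame, n=50, seed=1)
```

Fixing a seed makes the draw reproducible; omitting it approximates the one-off selection performed in SPSS.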

The introductory letter on the pilot questionnaire asked for a response by 5th November 2011, two weeks after the date the pilot was due to arrive at the address. Six responses were received within that timeframe; none were received after the deadline. The total response rate for the pilot questionnaire was therefore 12%. This response rate is slightly lower than some other housing surveys. It is unclear why, although the short time frame and the length of the survey may be contributory factors (Bryman, 2008). Given the extensive requirements of the research design for data from different stages of the search process, the overall length of the survey was not reduced, as this would hinder achieving the research objectives.

The pilot survey responses were not included in the overall number analysed as part of this research. No significant changes were made to the survey as a result of the pilot survey.


5.3.3 Survey Implementation


After testing the survey design through critical expert review, observation interviews, and the postal pilot, the questionnaire was distributed. The sample addresses were selected from the addresses identified by the Land Registry as residential properties registered as changing ownership in 2010 (the sampling frame), provided by Sheffield City Council. The target population is every household that purchased a dwelling for owner-occupation in 2010 in Sheffield. It is not possible to identify such households through secondary sources; however, the dwellings that have been purchased can be identified from the HMLR database, and therefore act as a proxy (the sample population)27.

This method for selecting addresses provides a reliable means of identifying residential properties that have exchanged ownership. Despite the property changing ownership, however, there is no certainty that the residents at a selected address are the owners of the property. Buy-to-let properties are unidentifiable from the HMLR records, and these properties are therefore included in the sample. A question about the tenure of the dwelling was included in the survey so that respondents who were not owner-occupiers could be removed. The data records the change of ownership of a property, and whilst the majority of dwellings change hands through a market exchange, some change hands through other mechanisms (e.g. inheritance). The HMLR data includes price paid information, so it is possible to remove addresses with very low recorded transaction prices, on the assumption that these are non-market transactions. Dwellings exchanged for less than £15,000 were removed; this resulted in ten dwellings being removed. Despite these limitations in the data, HMLR data provides the most reliable source of property transactions in the UK.
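The £15,000 threshold rule can be sketched as a simple filter. The record structure and field names below are assumed for illustration and are not taken from the HMLR schema.

```python
# Illustrative sketch of the non-market transaction filter described above:
# sales recorded below £15,000 are assumed to be non-market transfers
# (e.g. inheritance) and are dropped. Field names are hypothetical.
THRESHOLD = 15_000

def remove_non_market(records, threshold=THRESHOLD):
    return [r for r in records if r["price"] >= threshold]

sales = [
    {"address": "a1", "price": 125_000},
    {"address": "a2", "price": 1},       # likely a non-market transfer
    {"address": "a3", "price": 15_000},  # at the threshold, retained
]
market_sales = remove_non_market(sales)
```

Note that the filter keeps transactions at exactly £15,000, matching the text’s rule of removing only those sold for less than that amount.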

A total of 4,843 dwellings recorded by HMLR were deemed to have been sold on the market in 2010, and these comprise the sampling frame. A simple random sample was drawn from the sampling frame to derive the addresses to which the survey was sent. This study uses inferential statistics from the sample to suggest the behaviour of the target population. It does not, however, argue that the research represents with complete accuracy the behaviour of all households, nor that the typology is based on all transactions; rather, it infers behaviour from the sample to the target population. This is a limitation of the research, but is in line with most quantitative methods in the social sciences, where analysing the behaviour of the whole population is not possible. The reason for using a random sample is outlined below against the context of other sampling methods.

No sampling method can guarantee that the target population is exactly represented (Blaikie, 2010). Convenience, stratified random, and simple random sampling methods were all considered. Convenience sampling was disregarded, as it offers no clear advantage in selecting addresses for a postal survey, given the equal costs in time and money of distributing the surveys regardless of address. Its disadvantages are clearly evident where differences in behaviour may relate to housing types that are unevenly distributed across the city, to neighbourhoods, or to household characteristics that vary with location.

Stratified random sampling could have been used to provide a clear division between the housing type and neighbourhood characteristics outlined above. Using secondary data (e.g. the census) it would also be possible to stratify the sample based upon the household characteristics that most frequently occur within an output area (or larger geographical unit), but it would be uncertain whether the specific addresses selected within the sample would represent these household types. Stratified sampling also introduces complexity when attempting to build a typology of behaviour from the data (e.g. through the use of principal component analysis, described later). As it is not certain, ex-ante, whether the typology should be determined by location, housing type or household characteristics, it would not be clear how to stratify the survey based upon these characteristics, i.e. which element(s) should take precedence. Leishman (2003) suggests that stratified random sampling should be used in real estate research where the segmentation of groups is known, but not where it is unclear.

“One drawback to the stratified sampling method is that its use requires that we know, at the outset, the parameters that define segmentation. Sometimes that is not the case and we cannot therefore use quotas in the data collection phase of the research project. In these circumstances it is prudent to collect information on a range of parameters such as age, sex, marital status and so on. We can test for variation in responses with respect to these parameters statistically” (Leishman, 2003, p. 41)

Random samples of a large size are likely to represent the overall population. In this study the target population is every household that purchased a dwelling in Sheffield in 2010 for owner-occupation. Given the discussion above about the need not to pre-determine the selection criteria based on household or housing characteristics, and the limitations of information about household characteristics, a random sample offers a more intuitive basis for selection than a stratified sample. One issue for random sampling is selecting a procedure to create the random sample: given the inherent contradiction between ‘random’ and ‘process’, any method of selection can only approximate true randomness. SPSS was used to select the sample; this automated approach is pseudo-random rather than strictly random.

A total of 4,000 addresses were randomly selected to receive the postal survey (wave A). The surveys were distributed in envelopes using second-class postage. Inside each envelope was the paper survey and a return envelope pre-printed with the university Freepost address (see Appendix D for the envelope). The survey did not include a monetary reward for completion or entry into a competition28.

The survey response rate for wave A was 10%, with 399 responses received.

In order to increase the number of responses a second wave of surveys was distributed. Second waves can be sent either to addresses that have already received a survey or to addresses that have not. A further 1,000 surveys were distributed: 843 to addresses that had not yet been sent a survey (thus, after wave B, every address in the sampling frame had been sent the survey), and 157 to randomly selected addresses that had already been sent one (the addresses of wave A respondents were removed from the database prior to selecting the wave B addresses, and therefore could not be re-sampled).
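The wave B figures follow arithmetically from the sizes given above; the sketch below simply reproduces that arithmetic.

```python
# Check of the wave B distribution figures: the sampling frame held 4,843
# addresses, 4,000 received the survey in wave A, and wave B comprised
# 1,000 surveys split between unsurveyed and re-sampled addresses.
frame_size = 4843
wave_a_sent = 4000
wave_b_sent = 1000

not_yet_surveyed = frame_size - wave_a_sent   # addresses new to wave B
re_sampled = wave_b_sent - not_yet_surveyed   # previously surveyed addresses
```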



Survey response rate

The response rate as a proportion of the whole population of dwellings sold in 2010 is 9.7%, at 469 responses. This reflects the overall survey response rate when waves A and B are combined, as all 4,843 dwellings sold in 2010 were sampled.
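The reported rates can be recovered from the counts given in the text, as the short check below shows.

```python
# Arithmetic behind the reported response rates: wave A alone, and the
# combined waves as a proportion of the full sampling frame.
wave_a_sent, wave_a_responses = 4000, 399
total_sent, total_responses = 4843, 469

wave_a_rate = 100 * wave_a_responses / wave_a_sent   # 9.975, reported as 10%
overall_rate = 100 * total_responses / total_sent    # ~9.68, reported as 9.7%
```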



Fig. 5.4: Map of all sold dwellings in 2010 and survey responses

Source: Author’s own, Sold property data from HMLR, 2010

There is some variation in the proportion of responses by variables that are known from the HMLR data set, or can be imputed from it. Tables 5.2 to 5.5 show the population proportions (of all dwellings sold in 2010) and the proportion of all responses for the location, month of completion, new property, and type variables. Fig. 5.4 shows a map of all sold dwellings in 2010 (the target population and the addresses of all distributed surveys) and the survey responses. It is evident that some areas of the city had few dwelling transactions in 2010, and therefore that market outcomes were spatially uneven. It is unknown whether the reasons for this variation in spatial outcomes are predicated on variation in dwelling locations (and tenure locations) or on search behaviour. It is unlikely that the market was equally active at all price points, given the impact of the Global Financial Crisis on employment and on access to mortgage finance. It is also evident from the map that there are some areas with transactions but very few survey responses.

The proportion of responses for different locations shows some large variations. The East, North and South East housing market areas were all underrepresented by more than 3%, whilst the South West and North West were both overrepresented by more than 3%. Of the four largest housing market areas by the number of dwellings that changed ownership in 2010, two were overrepresented (South West and North West Urban) and two underrepresented (South East and City Centre). Broadly, the housing market areas in the west of the city had higher response rates than those in the east. This may reflect socio-economic divisions between the two sides of the city, but causality cannot be established from the response rates alone.


