Research Methods Essay

RESEARCH METHODOLOGY
The chapter focused on the methods used by the researcher to collect data on the causes of poor performance in mathematics at secondary schools in the Gokwe District central cluster. It covered the research design adopted, the research instruments used, the population, the sample, the sampling technique, data collection procedures and data analysis, and ended with a summary.
According to Walliman (2006), a questionnaire is a technique of data collection in which respondents are asked to respond to the same set of questions in a predetermined order.
In a questionnaire there are reduced chances of evaluator bias because the same questions are asked of all respondents. Some people also feel more at ease responding to a questionnaire than being interviewed. According to Ary et al. (2010: 384), a questionnaire guarantees confidentiality and thus perhaps elicits more truthful responses than would be obtained in an interview. Another advantage is that tabulating closed-ended responses is an easy and straightforward process. A questionnaire also saves time, since one obtains data from relatively large samples covering a wide geographical area, and it allowed the researcher to translate her objectives easily into specific questions.
Questionnaires suffer from low response rates, since some people may decide not to complete them. According to Fowler (2002), a low response rate reduces the sample size and may bias the results of the study. Ary et al. (2010: 380) observe that in questionnaires personal contact is missing and people are more likely to refuse to cooperate. Given this lack of contact, the researcher never knows who really completed the questionnaire, which reduces the validity and reliability of the results. Illiterate people are also left out, which reduces the size and diversity of the sample. Ary et al. (2010) argue that respondents may misinterpret a question, and in some cases items may not carry the same meaning for all respondents; this is worsened by the fact that there is no room for probing or clarification to obtain additional details. According to Ary et al. (2010), a questionnaire is inadequate for understanding emotions, and it is difficult to tell how truthful a respondent is. Good questions are also hard to write, and they take considerable time to develop.
Leedy (2001) argues that an interview is a two-way dialogue initiated by the interviewer to acquire information from the respondents. Mahlase (1997) says an interview refers to a situation in which answers are drawn directly from the respondents by an interviewer, who usually records the responses. The researcher used face-to-face interviews with a structured interview guide, which enabled her to gather comparable data from the secondary schools under study.
According to Ary et al. (2010: 380), interviews achieve a high response rate, since personal contact increases the likelihood that the interviewee will participate and provide the desired information. The interviewer will get answers to all or most of the questions. There is room for clarification, since questions can be repeated or their meanings explained, and the interviewer can press for relevant additional information when a response seems incomplete or not entirely relevant. The interviewer also has control over the order in which the questions are considered; in some cases it is important for respondents not to know the nature of later questions, because such knowledge might influence their responses to earlier ones. Interviews supply large volumes of in-depth data in a short space of time, and they provide first-hand information as well as insights into participants' perspectives.
Interviews are time consuming and expensive compared to other data collection methods. According to Ary et al. (2010), interviews are prone to social desirability bias, in which respondents want to please the interviewer by giving socially acceptable responses that they would not give in an anonymous questionnaire; they may say what they think the interviewer wants to hear. Interviews are also prone to interviewer bias, since the interviewer may reward, correct or encourage responses that fit his or her expectations through verbal and nonverbal cues (Ary et al., 2010). In an interview there is no anonymity, so respondents might fail to cooperate fully for fear of victimization, and an interview may seem intrusive to the respondent.
Delamont (2002) asserts that an observation schedule is an analytical form or coding sheet filled out by researchers during structured observation. It carefully specifies beforehand the categories of behaviors or events under study and the circumstances under which they should be assigned to those categories. Observations are then fragmented, or coded, into more manageable pieces of information, which are later aggregated into usable quantifiable data.
With an observation schedule, the researcher gets a real picture of behaviors and events as they manifest in natural settings. Systematic and unbiased observation can yield a true picture of individuals' natural behavior. Certain phenomena can be accessed and properly understood only through observation; interaction, for example, can be meaningfully assessed and understood only in this way.
The researcher has little control over the situation he or she is interested in observing. Croll (1986) observes that the presence of the researcher may influence the phenomenon itself; in other words, the people under study may change their activities in the presence of the researcher. As a result, the observer would fail to get a true picture of the behavior that would have taken place had the observer not been present. At times the researcher has to wait until the appropriate event takes place, so the exercise can be time consuming and labor intensive. However, the researcher chose this method in order to capture certain behaviors in their natural state.
According to Walliman (2006), a population is the total number of individuals who fit the criteria the researcher has laid out for research participants. The target population comprised the teachers of mathematics, the heads of mathematics departments and the heads of selected schools in the Gokwe South central cluster, together with the District officer of the Gokwe South central cluster. The cluster sampling technique was used to select the schools. The students under study are all doing mathematics at Ordinary level, come from the same geographical area and have the same socio-economic background. The teachers all teach pupils using the same mathematics syllabus, have almost the same economic status, were all trained at teachers' colleges that offer almost the same teacher education curriculum in terms of content coverage, and have experience working with pupils of different backgrounds and abilities. Of the three schools chosen, one is a boarding school whilst the other two are day schools.
According to Walliman (2006), a sample is the number of data sources actually selected from the total population; a sample is therefore part of a population. The purposive sampling technique was adopted. Chiromo (2006) posits that purposive sampling is a judgemental form of sampling in which the researcher purposely selects certain groups or individuals for their relevance to the issue being studied. The researcher was interested in the heads of the schools, the heads of the mathematics departments, the teachers of mathematics and the pupils learning mathematics at Ordinary level. According to Palys (2008), purposive sampling involves the researcher's judgment in selecting the respondents who will best answer the research questions; it enables the selection of key informants on the basis that they understand the challenges faced by teachers in the teaching of mathematics. According to Ary et al. (2010), in purposive sampling, also known as judgment sampling, sample elements judged to be typical or representative are chosen from the population, on the assumption that errors of judgment in the selection will counterbalance one another; it is useful in attitude and opinion surveys. Three school heads, six mathematics teachers, three heads of mathematics departments and one District officer were interviewed, and thirty pupils responded to questionnaires. The District education officer was automatically selected because of his or her responsibilities and relevance to the study. In selecting pupils, a hat method was used whereby pupils were asked to write their names and place them in a hat according to gender. A boy picked at random by the researcher was asked to draw five girls' names from the hat, one at a time; likewise, a randomly chosen girl drew five boys' names, and the pupils so chosen constituted the sample at that school. Five girls and five boys thus represented each school.
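The hat method described above amounts to simple random sampling stratified by gender. A minimal sketch of the equivalent procedure (the pupil names are hypothetical placeholders, not data from the study) might look like:

```python
import random

def sample_pupils(boys, girls, n=5, seed=None):
    """Randomly draw n boys and n girls without replacement,
    mirroring the 'hat method' of gender-stratified random selection."""
    rng = random.Random(seed)
    return rng.sample(boys, n), rng.sample(girls, n)

# Hypothetical class lists for one school
boys = [f"boy_{i}" for i in range(1, 16)]
girls = [f"girl_{i}" for i in range(1, 16)]

chosen_boys, chosen_girls = sample_pupils(boys, girls, n=5, seed=42)
print(chosen_boys, chosen_girls)  # five pupils of each gender per school
```

Sampling within each gender separately guarantees the 5:5 balance the study required, which a single unstratified draw of ten names would not.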
For anonymity purposes selected schools were named A, B and C.
The researcher upheld individuals' rights to confidentiality and privacy by assuring participants that the information gathered from them would be used for academic purposes only and that the respondents' names would not be used.
The researcher made sure that the participants were informed that their participation was voluntary, they could withdraw at any time, choose not to answer certain questions and that withdrawal bears no consequences.
The participants' identity was protected to make it impossible to link particular responses to particular names; the names of the participants were not written on the questionnaires.
The researcher visited the Head Office of the Ministry of Education and Culture with an introductory letter from Midlands State University seeking permission to collect data from schools. From there she visited the District office to seek permission to visit the schools, and then proceeded to the schools to collect data. The interviews were conducted, and the questionnaires were administered and collected immediately after completion. It took the researcher three days to complete the process of data collection.

In this study, the data collected through interviews, questionnaires and observation schedules were analysed and presented in tables and bar graphs. On the issue of reliability, the researcher followed proper research procedures and observed research ethics; anonymity on the questionnaires was strictly observed so as to allow the respondents to give accurate information. As for validity, the researcher made sure that the methods used complemented each other.
The chapter outlined the research methodology employed by the researcher. The following topics were dealt with: research design, research instruments, population, sample and sampling techniques, research ethics, data collection procedure and data analysis. The next chapter will look at the analysis and presentation of the data collected.

Ary, D., Jacobs, L. C. and Sorensen, C. (2010) Introduction to Research in Education (8th ed.). Wadsworth Cengage Learning.
Fowler, F. J. (2002) Survey Research Methods (3rd ed.). Thousand Oaks, CA: Sage.
Johnson, R. B. and Onwuegbuzie, A. J. (2004) Mixed methods research: a research paradigm whose time has come. Educational Researcher, 33(7), 14-26.
Leedy, P. (2001) Practical Research: Planning and Design. New York: Macmillan.
Morgan, D. L. (2008) Sampling. In The Sage Encyclopedia of Qualitative Research Methods, Volumes 1 and 2. Los Angeles: Sage.
Palys, T. (2008) Purposive sampling. In The Sage Encyclopedia of Qualitative Research Methods, Volumes 1 and 2. Los Angeles: Sage.
Popper, M. (2004) Leadership as relationship. Journal for the Theory of Social Behaviour, 34(2), 107-125.
Walliman, N. (2006) Social Research Methods. London: Sage.

Qualitative and quantitative methods in research on essay writing: no one way.

James Hartley and Kathryn Chesworth

Department of Psychology, Keele University, UK

Paper presented at Higher Education Close Up, an international conference from 6-8 July 1998 at University of Central Lancashire, Preston. This conference is jointly hosted by the Department of Educational Research, Lancaster University and the Department of Education Studies, University of Central Lancashire and is supported by the Society for Research into Higher Education

Correspondence to: Prof. James Hartley, Department of Psychology, Keele University, Staffordshire ST5 5BG, UK. e-mail:


In this paper the results from two studies on essay writing are contrasted. One uses a qualitative method and the other a quantitative one. The qualitative study is rich in detail but, for those of a quantitative disposition, it lacks sufficient quantitative information. We are not told, for example, what proportion of the students involved are men or women, traditional-entry or mature, and what disciplines they are studying. The quantitative study provides details of this kind but has problems of its own. Internal inconsistencies in the study reveal that the validity of some of the findings is questionable. The paper concludes by suggesting the necessity for combining - or sequentially chaining - different methods in research of this kind.


Qualitative studies in psychology can be fascinating and insightful but they may leave readers with a quantitative disposition worrying about the generality of their findings. Quantitative studies, on the other hand, whilst providing data from larger and more representative samples, seem more mechanical and arid to qualitative researchers. But both methods have advantages and disadvantages (see e.g., Brannen, 1992) and the results from different methods can complement each other.

This paper considers these issues in the context of studying the problems that students face in essay writing. It contrasts the recent results from a qualitative study (Street and Lea, 1997) with those from a quantitative one (described in this paper). At the time of writing we have copies of four of Street and Lea's articles in front of us (Lea and Street, 1996a; 1996b; 1996c and Street and Lea, 1997). Only in the major one are we told how many students, lecturers and institutions were involved in their study:

'The analysis of the research data has concentrated on the differing interpretations and understandings of academic staff and students with regard to academic writing within two contrasting university settings. 10 interviews were conducted with staff in the older university and 21 students were interviewed, either individually or in small groups. At the new university 13 members of academic staff and 26 students were interviewed.' (Street and Lea, 1997)

But these are the only numerical data given. We are not told, for example, what proportion of the students are men or women, traditional-entry or mature students, and what disciplines they are studying. And there are no quantitative data to qualify their results.

The actual results are of great interest. Street and Lea differentiate between three different, but overlapping, sources of difficulty for students writing essays and reports, and how their institutions deal with them. These are:

  1. Difficulties with 'deficits' - e.g., grammar, spelling, punctuation and style. These are mainly dealt with by study skills courses, and handouts/guides for students.
  2. Difficulties with 'interpretation' - e.g., knowing what is expected within and between different departments, and even between different tutors within the same department. These are largely ignored by many tutors, but students' ignorance of these matters is criticised in their feedback.
  3. Difficulties stemming from 'institutional failings' - e.g., the inability of staff to mark written work within a reasonable time span and with sufficient detail because of limited staff resources and the large numbers of students. These difficulties are ignored, defended or deplored by different members of staff.

Street and Lea place their greatest emphasis on the second area of difficulty - one that has not really been explored in previous research on essay writing (although see Hinkle, 1997). But are they right to do so? How widespread are the difficulties faced by their 47 students, and which do they find the greatest burden? We cannot answer questions such as these because of the qualitative nature of their reports. Nonetheless, the findings, the commentary and the suggestions are so interesting that we set out to gather some more - quantitative - evidence to try to build upon this pioneering study.


We devised a questionnaire on essay writing that addressed the three concerns discussed by Street and Lea. Thus there were questions about students' experiences of difficulties connected with 'deficits', 'interpretation' and 'institutional failings'. The questions were largely based upon the comments and discussion provided in Street and Lea (1997).

We gave this questionnaire to 102 second-year psychology students attending a lecture at Keele University at the beginning of their first-semester. These students were asked to complete the questionnaire with reference to the difficulties that they might have faced when writing their essays and reports in their first-year at Keele. Students at Keele typically study two principal subjects and two subsidiary subjects in their first-year, so they would have had to write essays/reports in four different subject matters during the year. (It is this kind of complexity that Street and Lea had in mind in their study and which has not been commented on before.)

In our study the responses from two overseas students who had not been at Keele in their first-year, and from six students studying conductive education (who did not follow a normal subsidiary programme) were deleted, making a total of 94 respondents. (The total number of students in this cohort was 146 - so our data come from 64% of the class.)

Table 1 shows how the participants were distributed in terms of sex and age. It is clear that, as in most studies with psychology students today, there are approximately three times as many women as men, and a greater preponderance of women among the mature students. (Street and Lea, of course, provide no such comparable data.) In fact we divided our students into three age-groups since previous work carried out at Keele (Trueman and Hartley, 1996) showed that significant differences between the performance of 'traditional-entry' (18-20yrs) and mature students (over 21yrs) manifested themselves more clearly when the 'mature students' were subdivided into two age-groups ('borderline mature' 21-24yrs and 'older mature' students aged 25 and over).


Preliminary analyses of the results showed, in fact, that there were sex differences on only two items in the questionnaire, and age differences on a further two. So here we present the data from the overall sample of 94 students. (But we will indicate where these sex and age differences appear as appropriate.)

Questions to do with 'deficits'

The students were asked to respond 'Yes', 'Sometimes', or 'No', as appropriate, to each of four items asking them if they experienced any difficulties with writing skills or 'deficits'. The percentages of the students responding to these items were as follows:

On two of these items, difficulties with punctuation and grammar, the mature students over 25yrs reported significantly less difficulty than did the traditional-entry students (χ², d.f. = 2: 7.04, p<.05, and 8.27, p<.02 respectively).

Questions to do with 'interpretation'

Here the students were asked to indicate 'Yes, 'Sometimes' or 'No' to 12 items asking them whether or not they had experienced any difficulties interpreting what was required. The percentages of the students responding to these items were as follows:

(The women students reported that they experienced difficulties 'Sometimes' significantly more often than did the men students on the item about sequencing the content of their essays.)

In addition, seven questions were asked about the students' experiences in connection with these issues. The percentages responding to these items were as follows (N.B. there were some missing data here):

Questions to do with 'institutional failings'

The students were asked to respond, 'Yes', 'Sometimes' or 'No' to seven items about 'institutional failings' - as far as they were concerned. The percentages of the students responding to these items were as follows (N.B. not all students responded to the last two items):

(For one reason or another the women students reported significantly more 'No' responses than the men to the item asking about whether or not their written work was marked by a postgraduate student. It is possible that this reflects the point that many students might not know the answer to this question, and indeed this is supported by the fact that eight students did not respond to this item.)


The overall results (in percentages) from the three categories of difficulties were thus as follows:

These results suggest that students experience most difficulties with their 'deficits' and fewer but about equal difficulties with their 'interpretations' and 'institutional failings'.

But now we come to a difficulty with questionnaire data. To what extent can these results be accepted? For example, we were greatly surprised to find that 80% of our students claimed that they had not received any guidance in the form of handouts/handbooks, when we knew that all of them (i) had been provided with a student handbook in their first year - which had a section on essay and report writing in it - and (ii) had been encouraged to buy a specific booklet entitled 'How to Write a Lab Report'. It is possible, of course, that the students did not realise that their Handbook had such a section within it, and that few of them bought the booklet, but this form of argument seems a little weak. Additional evidence is required.

Furthermore, and perhaps more importantly, it is possible that it is easier for students to admit to difficulties that might be thought of as trivial - such as spelling - than it is to admit to others that might be thought of as serious - such as not knowing what to do. And, in terms of Street and Lea's analyses, students may have found it easier to use this language, even though it does not get to the heart of their difficulties. (See also Lea and Street, 1998.)

Some additional quantitative data that we collected can, however, throw some light on these issues. The questionnaire covered two sides of a sheet of A4 paper, and at the bottom of each side, we asked the students to 'underline the one (or two) items above' that caused them ' the greatest' difficulty.

Page one of the questionnaire had items on 'deficits' and 'interpretations' and page two had the items on 'institutional failings'. The items most frequently underlined on page one were:

28% Difficulties with knowing what was wanted.

16% Difficulties with knowing what to read.

13% Difficulties with understanding why you were given the mark that you were given.

10% Difficulties with organising the structure of the material.

Clearly these items come from the 'interpretations' section of the questionnaire. The most frequently underlined items from the 'deficits' part were:

06% Difficulties with spelling.

06% Difficulties with referencing/providing bibliographies.

On page two, where the items covered 'institutional failings' and the additional questions described earlier, the items most frequently underlined were:


44% Difficulties with lack of appropriate materials in the library.

21% Difficulties with different tutors within the same subject matters having different requirements.

17% Feedback taking longer than three weeks.

12% Unhelpful feedback

12% Varying quality of feedback

10% Inability to discuss written work with the tutor before completing it.

These findings suggest that the difficulties of 'interpretation' and 'institutional failings' caused greater problems than did difficulties caused by 'deficits'. It is clear, then, that the picture of difficulties generated by the underlining method is different from that provided by the questionnaire method. Indeed, the results conflict at one point when the questionnaire data suggest that only 16% of the students had problems with the library, but the underlining method suggests that this figure is 44%. Following the underlining method it is tempting to suggest that the difficulties with the 'institutional failings' were the greatest source of problems for our students.

Whatever the case, it is important not to forget the main findings of our questionnaire. The results have suggested that in each area of difficulty - 'deficit', 'interpretation' or 'institutional failing' - over two-thirds of our students have admitted to difficulties in connection with their written work in their first year - even if this is only 'Sometimes'. Similar findings have been reported for 'deficits' by others (see Hinkle, 1997; Robertson, Keating and Cooper, 1998; Winch and Wells, 1995). But the current findings have wider implications for first-year tutors and the authors of study manuals. Our results (and those of Street and Lea, and Hinkle) all point to the fact that staff need to pay more attention to resolving 'institutional failings' and helping students with their problems of 'interpretation' than they currently do.


We have argued by implication in this paper that issues arising from qualitative studies need to be followed up by quantitative ones. We acknowledge that this is by no means a new argument. According to McLeod (1994) such 'triangulation' of methods has been advocated since the 1930s. Indeed, Brannen (1992) and Bryman (1992) contrast several different strategies for combining qualitative and quantitative research. Some researchers (e.g. Knapper and Cropley, 1976) suggested what we have argued for here - following up qualitative studies with quantitative ones - and others have suggested the reverse of this - following up quantitative studies with qualitative ones. Jacobs (1996) for example, in the context of higher education, found that using a qualitative approach helped to clarify the results obtained from an earlier quantitative study of library usage at the University of Sussex.

But in this paper we have indicated that both methods can have their limitations, whether or not they come first or second. Perhaps it would be best if each method fed into the other in a sort of sequential reflective chain or spiral. In terms of the present study, however, we now need a further qualitative approach to try and tease out the explanations for the failings in our quantitative results.

Acknowledgments. We are grateful to colleagues at the University of Keele and to Mary Lea and Brian Street for helpful criticisms of an earlier draft of this paper.


Brannen, J. (Ed.) (1992). Mixing Methods: Qualitative and Quantitative Research. Aldershot: Avebury.

Bryman, A. (1992). Quantitative and qualitative research: further reflections on their integration. In J. Brannen (Ed.) Mixing Methods: Qualitative and Quantitative Research. Aldershot: Avebury.

Hinkle, A. (1997). Transcriptional and compositional responses to student writing: Designing courses with social climates supportive of written expression. In C. Rust and G. Gibbs (Eds.) Improving Student Learning: Improving Student Learning Through Course Design. Oxford: Oxford Centre for Staff and Learning Development (Oxford Brookes University).

Jacobs, N. A. (1996). Students' perceptions of the library-service at the University of Sussex - practical quantitative and qualitative research in an academic-library. Journal of Documentation, 52, 2, 139-162.

Knapper, C. & Cropley, A. (1976). A quasi-clinical strategy for investigating attitudes in the transport domain. In P. Stringer and H. Wenzel (Eds.) Transportational Planning for a Better Environment. New York: Plenum.

Lea, M. & Street, B. (1996a). Perspectives on academic literacy. Exchanges: Learning and Teaching at the University of North London. 2 June, pp 4-5.

Lea, M. & Street, B. (1996b). Academic literacies. Learning Matters, No 3, Summer, pp 2-4.

Lea, M. & Street, B. (1996c). Student writing and faculty feedback in higher education: an academic literacies approach. In G. Kress (Ed.) Domains of Literacy. London: Institute of Education.

Lea, M. & Street, B. (1998). Student writing in higher education: an academic literacies approach. Studies in Higher Education (in press).

McLeod, J. (1994). Doing Counselling Research. London: Sage.

Robertson, C., Keating, I. and Cooper, B. (1998). 'I don't seem to have done very much work on English Grammer (sic) at all'. A study of the written English skills of first year undergraduate students: their perceptions of the reality. Journal of Further and Higher Education, 22, 5-14.

Street, B. & Lea, M. (1997). Perspectives on academic literacies: an institutional approach. ESRC End of Award Report Ref. No. R000221557. Swindon: ESRC.

Trueman, M. & Hartley, J. (1996). A comparison between the time-management skills and academic performance of mature and traditional-entry students. Higher Education, 32, 2, 199-215.

Winch, C. & Wells, P. (1995). The quality of student writing in higher education: A cause for concern? British Journal of Educational Studies, 43, 1, 75-87.

James Hartley is Research Professor in the Department of Psychology, Keele University, and author of several textbooks, including Designing Instructional Text (3rd edition) (Kogan Page) and Learning and Studying (Routledge). Kathryn Chesworth was a second-year psychology student at Keele when she assisted Professor Hartley with analysing the data presented in this paper.

This document was added to the Education-line database 26 June 1998
