This is part of the methodology tutorial.
Details of a given research cycle may vary considerably between different approaches. However, most research should start with an activity that aims to define precise research questions, through both a review of the literature and reflection on the objectives. We shall come back to these issues in this and later modules.
For a given research question, you usually have to do the following:
Now let's start looking at research objectives, i.e. the starting point of a research project. The essence of your research objectives should be formulated in terms of clear research questions.
Research questions are the result of:
Everything you plan to do must be formulated as a research question!
See Methodology tutorial - finding a research subject, where we elaborate this question in more detail.
Under the umbrella term "conceptualization" we group a range of intellectual instruments that help you organize theoretical and practical knowledge about your research subject.
So one of your first tasks is to find, elaborate and "massage" some concepts so that they can be used to study observable phenomena. Some of these concepts globally determine how you look at things; others act as explanatory variables or variables to be explained.
We shall come back to this issue in the Methodology tutorial - conceptual frameworks and just provide a few examples here.
Analysis frameworks help you look at things.
One popular framework in educational technology research is activity theory, which has its roots in what could be called Soviet micro-sociology.
Quote: The Activity Triangle Model or activity system representationally outlines the various components of an activity system into a unified whole. Participants in an activity are portrayed as subjects interacting with objects to achieve desired outcomes. In the meanwhile, human interactions with each other and with objects of the environment are mediated through the use of tools, rules and division of labor. Mediators represent the nature of relationships that exist within and between participants of an activity in a given community of practices. This approach to modelling various aspects of human activity draws the researcher's attention to factors to consider when developing a learning system. However, activity theory does not include a theory of learning. (Daisy Mwanza & Yrjö Engeström)
Translation: It helps us think about the workings of an organization (including its actors and processes) and therefore about how to study it.
Such a framework is not true or false, just useful (or useless) for a given intellectual task!
Models and hypotheses are important in theory-driven research, e.g. experimental research on learning.
Example: Imagine a research question that aims to find out whether there is a causal relation between teacher training and quality of teaching. In empirical research, such a research question would be formulated as a hypothesis that can then be tested.
An often heard hypothesis states the following:
The picture below illustrates the following principles:
We shall come back later to this central concept of operationalization. The point here was to show that hypotheses should first be formulated at a conceptual level. They then need to be rephrased to become operational, i.e. applicable to real data.
Before we look further into this operationalization issue, we need to introduce the concept of "variance". Variance means that things can differ, e.g. teachers can receive {none, little, some, a lot, ...} of training. So we get a variable "amount of teacher training" that can take different values (none, little, etc.). If we find this variety of values in our observations, we have variance. If all teachers receive the same amount of training, we don't have variance. Research needs variance: without variance (no differences) we can't explain anything.
Furthermore, we need co-variance. Since empirical research wants to find out why things are the way they are, we must observe how explaining variables that vary ("more" or "less") relate to variables to be explained that also vary. In other words: without co-variance, no explanation. The sketch below illustrates both concepts.
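To make this concrete, here is a minimal sketch in Python (the numbers are invented for illustration, not real data):

```python
# Minimal sketch: variance and co-variance with invented illustrative data.
from statistics import mean, variance

training = [0, 1, 2, 2, 3, 4]   # amount of teacher training (e.g. in days)
quality  = [2, 3, 3, 4, 4, 5]   # rated quality of teaching (1-5 scale)

def covariance(x, y):
    """Sample co-variance: the extent to which two variables vary together."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

print(variance(training))             # > 0: we do have variance
print(covariance(training, quality))  # > 0: training and quality co-vary

# If all teachers received the same amount of training, there would be
# no variance and nothing to explain with this variable:
print(variance([2, 2, 2, 2, 2, 2]))   # 0.0
```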
Let's explain this principle with two examples.
Imagine that we wish to know why certain private schools introduce technology faster than others. One hypothesis to test could be: "Reform happens when there is external pressure". So we have two variables (and corresponding values):
To operationalize these variables, we use written traces as indicators of "pressure" and observable actions as indicators of "reform".
Strategies of a school in reaction to different types of external pressure:

| Type of pressure | strategy 1: no reaction | strategy 2: a task force is created | strategy 3: internal training programs are created | strategy 4: resources are reallocated |
|---|---|---|---|---|
| letters written by parents | (N=8) (p=0.8) | (N=2) (p=0.2) | | |
| letters written by supervisory boards | | (N=4) (p=0.4) | (N=5) (p=0.5) | (N=1) (p=0.1) |
| newspaper articles | | | (N=1) (p=0.2) | (N=4) (p=0.8) |
N = number of observations, p = proportion
The result of this (imaginary) research is: increased pressure leads to increased action. For example, the data tell us that letters from parents mostly provoke no reaction (p=0.8), whereas newspaper articles mostly lead to a reallocation of resources (p=0.8).
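As a small sketch (assuming scipy is available), one can recompute the p values from the counts and test whether pressure type and strategy are statistically associated; the counts below are those of the imaginary table above:

```python
# Recompute the proportions (p) from the counts (N) and test the association.
from scipy.stats import chi2_contingency

counts = [
    [8, 2, 0, 0],  # letters written by parents
    [0, 4, 5, 1],  # letters written by supervisory boards
    [0, 0, 1, 4],  # newspaper articles
]

for row in counts:
    total = sum(row)
    print([round(n / total, 2) for n in row])  # the p values of the table

chi2, p_value, dof, expected = chi2_contingency(counts)
print(round(chi2, 2), round(p_value, 4))
# Caution: with counts this small the chi-squared test is unreliable;
# it is shown only to illustrate the idea of testing co-variance.
```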
Of course, such results have to be interpreted carefully and for various reasons, but we shall come back to these validity issues in another tutorial module.
Now let's address the operationalization issue in more depth, since this is a very important issue in empirical research.
A scientific proposition contains concepts (theoretical variables).
An academic research paper links concepts, and an empirical paper grounds these links with data.
We have a real problem here! How could we measure "pedagogical effect" or "collaborative learning"? Finding good measures is not trivial.
There are two issues you must address if you want to minimize the (unavoidable) gap between theoretical variables and variables that you can observe.
Examples:
Examples from educational design, i.e. dimensions you might consider when you plan to measure the socio-constructiveness of some teaching:
Example from public policy analysis:
Example from HCI:
Review questions: Try to decompose a concept you find interesting or think about the usefulness of decompositions presented above.
Taylor and Maor (2000) developed an instrument to study on-line environments, called the "Constructivist On-Line Learning Environment Survey" (COLLES); the questionnaire is available on-line.
This survey instrument allows one “to monitor the extent to which we are able to exploit the interactive capacity of the World Wide Web for engaging students in dynamic learning practices.” The key qualities (dimensions) this survey can measure are: relevance, reflection, interactivity, tutor support, peer support, and interpretation.
Each of these dimensions is then measured with a few survey questions (items), e.g. the ones below (a simple scoring sketch follows the table):
| Statements | Almost Never | Seldom | Sometimes | Often | Almost Always |
|---|---|---|---|---|---|
| **Items concerning relevance** | | | | | |
| ... my learning focuses on issues that interest me. | O | O | O | O | O |
| ... what I learn is important for my professional practice as a trainer. | O | O | O | O | O |
| ... I learn how to improve my professional practice as a trainer. | O | O | O | O | O |
| ... what I learn connects well with my professional practice as a trainer. | O | O | O | O | O |
| **Items concerning reflection** | | | | | |
| ... I think critically about how I learn. | O | O | O | O | O |
| ... I think critically about my own ideas. | O | O | O | O | O |
| ... I think critically about other students' ideas. | O | O | O | O | O |
| ... I think critically about ideas in the readings. | O | O | O | O | O |
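One simple way to score such Likert-type items (a sketch; the 1-5 mapping and the averaging are our illustrative choices, not something prescribed by COLLES) is:

```python
# Sketch: map the frequency scale to 1-5 and average the items of a dimension.
SCALE = {"Almost Never": 1, "Seldom": 2, "Sometimes": 3,
         "Often": 4, "Almost Always": 5}

def dimension_score(answers):
    """Mean item score for one dimension (1 = low, 5 = high)."""
    values = [SCALE[a] for a in answers]
    return sum(values) / len(values)

# Hypothetical answers of one respondent to the four relevance items:
relevance_answers = ["Often", "Almost Always", "Sometimes", "Often"]
print(dimension_score(relevance_answers))  # 4.0
```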
In the diagram below, we briefly picture how one could envisage measuring economic development with official statistics (only part of the diagram is shown).
Let's summarize this section on concept operationalization. There are a few issues you should think about critically.
With somewhat operationalized research questions (and that may include operational hypotheses), you then have to think carefully about what kinds of data you will observe and which cases (population) you will look at.
Measuring means:
Sampling refers to the process of selecting the "cases", e.g. people, activities, situations, etc. that you plan to look at. These cases should be representative of the whole. E.g. in survey research, the 500 persons who answer the questionnaire should represent the whole group of people you are interested in, e.g. all primary teachers in a country, all students of a university, or all voters of a state.
As a general rule: make sure that "operative" variables have good variance; otherwise you can’t make any statements about causality or difference.
We define operative variables as dependent (to be explained) plus independent (explaining) variables.
Sampling in quantitative research is relatively simple: you select a sufficiently large number of cases within a given mother population (the one that your theory is about). The best sampling strategy is to randomly select a sample from the mother population, but the difficulty is to identify all the members of the mother population and to have them participate. We will address this issue again in Methodology tutorial - quantitative data acquisition methods.
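A minimal sketch of simple random sampling (the population list is hypothetical; in practice, building this list is the hard part):

```python
# Sketch: simple random sampling without replacement.
import random

# Pretend we could enumerate all 12,000 primary teachers of a country:
mother_population = [f"teacher_{i}" for i in range(12_000)]

sample = random.sample(mother_population, k=500)
print(len(sample), sample[:3])

# The difficulty in real research lies not in this call, but in obtaining
# a complete sampling frame and getting the selected people to respond.
```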
Sampling can be more complex in qualitative research. Here is a short overview of sampling strategies you might use; more will be mentioned in Methodology tutorial - qualitative data acquisition methods.
| Type of selected cases | Usage |
|---|---|
| maximal variation | will give better scope to your results (but requires more complex models; you will have to control more intervening variables, etc.) |
| homogeneous | provides better focus and conclusions; "safer", since it is easier to identify explaining variables and to test relations |
| critical | exemplifies a theory with a "natural" example |
| according to theory, i.e. your research questions | gives you better guarantees that you will be able to answer your questions |
| extreme and deviant cases | test the boundaries of your explanations, seek new adventures |
| intense | completes a quantitative study with an in-depth study |
Let's now look for the first time at what we mean by data. Data are not only numbers, but also text, photos and videos! However, we will not discuss details here; see the modules:
Below is a table with the principal forms of data collection (also called data acquisition).
| Situation | Articulation: non-verbal | Articulation: verbal (oral) | Articulation: verbal (written) |
|---|---|---|---|
| informal | participatory observation | informal interview | text analysis, log file analysis, etc. |
| formal and open | systematic observation | open interviews, semi-structured interviews, thinking-aloud protocols, etc. | open questionnaires, journals, vignettes, etc. |
| formal and structured | experiment, simulation | standardized interview | standardized questionnaire, log files of structured user interactions, etc. |
Let's introduce the reliability principle.
Reliability is the degree of measurement consistency for the same object:
Example: repeatedly measuring the temperature of boiling water.
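A minimal sketch of this idea, with invented thermometer readings:

```python
# Sketch: reliability as consistency of repeated measurements of one object.
from statistics import mean, stdev

readings = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0, 99.7, 100.1, 100.0, 99.9]

print(round(mean(readings), 2))   # close to 100 degrees C
print(round(stdev(readings), 3))  # a small spread means a consistent,
                                  # hence reliable, measurement procedure
```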
Sub-types of reliability (Kirk & Miller): quixotic reliability (a single method of observation continually yields the same result), diachronic reliability (the stability of an observation through time), and synchronic reliability (the similarity of observations within the same time period).
Reliability can also be understood in a wider sense. Empirical measures are used as, or combined into, indicators for variables. So "indicator" is just a fancy word for either a simple or a combined measure.
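When an indicator combines several items, one common consistency check (among others) is Cronbach's alpha; here is a minimal sketch with hypothetical scores:

```python
# Sketch: internal consistency of a combined indicator (Cronbach's alpha).
from statistics import variance

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])
    items = list(zip(*rows))                      # one tuple of scores per item
    item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(r) for r in rows])  # variance of the sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

answers = [  # four respondents x three items, on a 1-5 scale (invented)
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
]
print(round(cronbach_alpha(answers), 2))  # values near 1 = consistent items
```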
Anyhow, measures (indicators) can be problematic in various ways, and you should look out for the "3 Cs":
Are your data complete?
Are your data correct?
Are your data comparable?
Having good and reliable measures doesn't guarantee at all that your research is well done, in the same way that correctly written sentences do not guarantee that a novel is good reading.
The fundamental questions you have to ask are:
These issues are really tricky!
Validity (as well as reliability) determines the formal quality of your research. More specifically, the validity of your work (e.g. your theory or model) is determined by the validity of its analysis components.
In other words:
Validity is not the only quality factor of empirical research, but it is the most important one. In the table below we show some elements that can be judged and how they are likely to be judged.
| Elements of research | Judgements |
|---|---|
| Theories | usefulness (understanding, explanation, prediction) |
| Models ("frameworks") | usefulness & construction (relation between theory and data, plus coherence) |
| Hypotheses and models | validity & logical construction (models) |
| Methodology ("approach") | usefulness (to theory and to the conduct of empirical research) |
| Methods | good relation with theory, hypotheses, methodology, etc. |
| Data | good relation with hypotheses and models, plus reliability |
A good piece of work first of all satisfies an objective, but it must also be valid.
The same message told differently:
Let's now look a little more at causality, which depends very much on so-called "internal validity".
Correlations between data don't prove much by themselves! In particular:
The best protection against such errors is theoretical and practical reasoning!
Example: a conclusion drawn from a superficial data analysis could be the following statement: “We introduced ICT in our school and student satisfaction is much higher.” However, if you think hard, you might want to test the alternative hypothesis that it is maybe not ICT, but rather a reorganization effect that had an impact on various other variables, such as teacher-student relationships, teacher investment, etc.
If you observe correlations in your data and you are not sure, talk about association, not cause!
Even if you can provide sound theoretical evidence for your conclusion, you have the duty to look at rival explanations!
Note: there are methods to test rival explanations (see modules on data-analysis)
Below we show some examples of simple hidden causalities.
Of course, there exist quantitative and qualitative methods to test for this ... but just start thinking first!
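As a sketch of the problem: simulated data where a hidden third variable (say, a reorganization effect) drives both "ICT use" and "satisfaction", producing a strong correlation without any direct causal link between them:

```python
# Sketch: a confounder induces correlation between two causally unrelated variables.
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(1)
reorganization = [random.gauss(0, 1) for _ in range(1000)]  # hidden cause
ict_use      = [r + random.gauss(0, 0.5) for r in reorganization]
satisfaction = [r + random.gauss(0, 0.5) for r in reorganization]

# Strong correlation, although ict_use never enters satisfaction:
print(round(pearson(ict_use, satisfaction), 2))  # roughly 0.8
```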
We end this short introduction to empirical research principles with a short list of advice:
Good analytical frameworks (e.g. instructional design theory or activity theory) will provide structure to your investigation and will allow you to focus on essential things.
You can’t answer your research question without a serious operationalization effort.
Identify major dimensions of the concepts involved; use good analysis grids!
You can’t prove a hypothesis (you can only test, reinforce, corroborate, etc.).
Good informal knowledge of a domain will also help. Don’t hesitate to discuss your conclusions with a domain expert.
Purely inductive reasoning approaches are difficult and dangerous ... unless you master an adapted (and costly) methodology, e.g. "grounded theory".
Humans tend to look for facts that confirm their reasoning and to ignore contradictory elements. It is your duty to test rival hypotheses (or at least to think about them)!
Show others what they can learn from your piece of work; confront your work with others’!
Different viewpoints (and measures) can consolidate or even refine results. E.g. imagine that (a) you led a quantitative study about teachers’ motivation to use ICT in school, or (b) you administered an evaluation survey form to measure user satisfaction with a piece of software. You could then run a cluster analysis on your data and identify major types of users, e.g. 6 types of teachers or 4 types of users.
Then you could do in-depth interviews with two representatives of each type, "dig" into their attitudes, subjective models, abilities, behaviors, etc., and confront these results with your quantitative study, as sketched below.
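A minimal sketch of the quantitative step (scikit-learn assumed; the data and the choice of three clusters are hypothetical):

```python
# Sketch: identify "types" of respondents with k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.uniform(1, 5, size=(200, 4))  # 200 respondents x 4 dimensions

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
print(np.bincount(kmeans.labels_))  # size of each "type"

# Mixed-methods follow-up: interview, for instance, the two respondents
# closest to each cluster center and compare with the quantitative results.
```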
Finally, let's recall from the Methodology tutorial - introduction that there exist very different research types. Each of these has certain advantages over the others, e.g.:
But:
See Research methodology resources for web links and a general larger bibliography.