However, as illustrated, it is possible to display the discrete categories along the y-axis. The bar chart is an effective way of visually displaying the magnitude of each subcategory of a variable. In the case of a 100 per cent bar chart, the subcategories of a variable are converted into percentages of the total population. Each bar, which totals 100, is sliced into portions relative to the percentage of each subcategory of the variable.

A frequency polygon is drawn by joining the midpoint of each rectangle at a height commensurate with the frequency of that interval (Figure 16. One problem in constructing a frequency polygon is what to do with the two categories at either extreme. To bring the polygon line back to the x-axis, imagine that the two extreme categories have an interval similar to the rest and assume the frequency in these categories to be zero. From the midpoints of these intervals, you extend the polygon line to meet the x-axis at both ends. A frequency polygon can be drawn using either absolute or proportionate frequencies.

The cumulative frequency polygon
The cumulative frequency polygon or cumulative frequency curve (Figure 16. The main difference between a frequency polygon and a cumulative frequency polygon is that the former is drawn by joining the midpoints of the intervals, whereas the latter is drawn by joining the end points of the intervals, because cumulative frequencies interpret data in relation to the upper limit of an interval. As a cumulative frequency distribution tells you the number of observations less than a given value and is usually based upon grouped data, the upper limit of each interval needs to be used when interpreting the distribution.

The stem-and-leaf diagram for a frequency distribution running into two digits is plotted by displaying the digits 0 to 9 on the left of the y-axis, representing the tens of a frequency. Note that the stem-and-leaf display does not use grouped data but absolute frequencies. If the display is rotated 90 degrees in an anti-clockwise direction, it effectively becomes a histogram. With this technique some of the descriptive statistics relating to the frequency distribution, such as the mean, the mode and the median, can easily be ascertained; however, the procedure for their calculation is beyond the scope of this book. Stem-and-leaf displays are also possible for frequencies running into three and four digits (hundreds and thousands).

There are 360 degrees in a circle, and so the full circle can be used to represent 100 per cent, or the total population. The circle or pie is divided into sections in accordance with the magnitude of each subcategory, and so each slice is in proportion to the size of each subcategory of a frequency distribution. Manually, pie charts are more difficult to draw than other types of graph because of the difficulty of measuring the degrees of the pie/circle. They can be drawn for both qualitative data and variables measured on a continuous scale but grouped into categories.
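The construction rules above (join interval midpoints, close the polygon with zero-frequency intervals at the extremes, plot cumulative totals against interval upper limits) translate directly into a short plotting routine. The sketch below is illustrative only: the grouped age data, the interval widths and the use of matplotlib are assumptions, not details taken from the text.

```python
# Minimal sketch: frequency polygon and cumulative frequency polygon for
# hypothetical grouped data (intervals and counts are invented for illustration).
import matplotlib.pyplot as plt

intervals = [(18, 22), (23, 27), (28, 32), (33, 37), (38, 42)]  # grouped age data
frequencies = [4, 9, 14, 8, 5]                                  # absolute frequencies

# Frequency polygon: plot each frequency at the midpoint of its interval,
# adding zero-frequency intervals at both extremes to bring the line back
# to the x-axis.
midpoints = [(lo + hi) / 2 for lo, hi in intervals]
width = midpoints[1] - midpoints[0]
x_poly = [midpoints[0] - width] + midpoints + [midpoints[-1] + width]
y_poly = [0] + frequencies + [0]

# Cumulative frequency polygon: plot running totals against the upper limit
# of each interval.
upper_limits = [hi for _, hi in intervals]
cumulative = [sum(frequencies[: i + 1]) for i in range(len(frequencies))]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(x_poly, y_poly, marker="o")
ax1.set(title="Frequency polygon", xlabel="Age (interval midpoints)", ylabel="Frequency")
ax2.plot(upper_limits, cumulative, marker="o")
ax2.set(title="Cumulative frequency polygon", xlabel="Age (upper limits)",
        ylabel="Cumulative frequency")
plt.tight_layout()
plt.show()
```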
In a line or trend diagram, if the data relates to a period, the midpoint of each interval is marked as a dot at a height commensurate with each frequency, as in the case of a frequency polygon. If the data pertains to an exact time, a point is plotted at a height commensurate with the frequency. A line diagram is a useful way of visually conveying changes when long-term trends in a phenomenon or situation need to be studied, or when the changes in a subcategory of a variable are measured on an interval or a ratio scale (Figure 16. For example, a line diagram would be useful for illustrating trends in birth or death rates and changes in population size.

The area chart
For variables measured on an interval or a ratio scale, information about the subcategories of a variable can also be presented in the form of an area chart. This is plotted in the same way as a line diagram but with the area under each line shaded to highlight the total magnitude of the subcategory in relation to other subcategories.

For a scattergram, both variables must be measured on interval or ratio scales, and the data on both variables needs to be available in absolute values for each observation – you cannot develop a scattergram for categorical variables. Data for both variables is taken in pairs and displayed as dots in relation to their values on both axes. Let us take the data on age and income for 10 respondents of a hypothetical study in Table 16. The relationship between age and income based upon these hypothetical data is shown in Figure 16.
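A scattergram of this kind is straightforward to produce. The sketch below uses invented age and income pairs for 10 respondents (the actual values in Table 16 are not reproduced here), so the numbers, units and library choice are placeholders only.

```python
# Minimal sketch: a scattergram of age against income for 10 respondents.
# The paired values are hypothetical placeholders, not the data from Table 16.
import matplotlib.pyplot as plt

age = [23, 27, 31, 34, 38, 42, 46, 51, 55, 60]        # years
income = [28, 31, 35, 37, 42, 45, 47, 52, 50, 54]     # $'000 per year

plt.scatter(age, income)
plt.xlabel("Age (years)")
plt.ylabel("Income ($'000)")
plt.title("Relationship between age and income (hypothetical data)")
plt.show()
```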
The use of statistical measures in certain situations is desirable and in some it is essential; however, you can conduct a perfectly valid study without using any statistical measure. There are many statistical measures, ranging from very simple to extremely complicated. At one end of the spectrum you have simple descriptive measures such as the mean, the mode and the median; at the other, there are inferential statistical measures such as analysis of variance, factorial analysis and multiple regression. Because of its vastness, statistics is considered a separate academic discipline, and before you are able to use these measures you need to learn about them. The use of statistical measures is dependent upon the type of data collected, your knowledge of statistics, the purpose of communicating the findings, and the knowledge base in statistics of your readership. Before using statistical measures, make sure that the data lends itself to their application, that you have sufficient knowledge about them, and that your readership can understand them.

Summary
Research findings in both quantitative and qualitative research are usually conveyed to readers through text. However, in quantitative studies, though text is still the dominant method of communicating research findings, it is often combined with other forms such as tables, graphs and statistical measures. These can make communication better, clearer, more effective and easier to understand. What you use should be determined by what you feel comfortable with, what you think will be easiest for readers to understand and what you think will enhance the understanding of your writing. Tables have the advantage of containing a great deal of information in a small space, while graphs make it easy for readers to absorb information at a glance. Usually, a table will have five parts: title, stub, column headings, body and supplementary notes or footnotes. Depending upon the number of variables about which a table stores information, there are three types of table: univariate (frequency), bivariate (cross-tabulation) and polyvariate. To interpret a table, simple arithmetic procedures such as percentages, cumulative frequencies or ratios can be used. You can also calculate simple statistical procedures such as the mean, the mode, the median, the chi-square test, the t-test and the coefficient of correlation. While there are many types of graph, the common ones are: the histogram, the bar diagram, the stacked bar chart, the 100 per cent bar chart, the frequency polygon, the stem-and-leaf display, the pie chart, the line or trend diagram, the area chart and the scattergram. Which is used depends upon your purpose and the measurement scale used to measure the variable(s) being displayed. Some graphs are difficult to draw by hand, but several computer programs can produce them easily.

Identify two specific examples where you could use a table rather than just text to communicate findings, and two examples where graphs would be better. Construct a hypothetical bivariate table within the context of an area of interest.

Writing a research report
The last step in the research process is writing the research report. Each step of the process is important for a valid study, as negligence at any stage will affect the quality of not just that part but the whole study. In a way, this last step is the most crucial, as it is through the report that the findings of the study and their implications are communicated to your supervisor and readers. Most people will not be aware of the amount and quality of work that has gone into your study. While much hard work and care may have been put into every stage of the research, all readers see is the report. As Burns writes, ‘extremely valuable and interesting practical work may be spoiled at the last minute by a student who is not able to communicate the results easily’ (1997: 229). In addition to your understanding of research methodology, the quality of the report depends upon such things as your written communication skills and clarity of thought, your ability to express thoughts in a logical and sequential manner, and your knowledge base of the subject area. Another important determinant is your experience in research writing: the more experience you acquire, the more effective you will become in writing a research report. The use of statistical procedures will reinforce the validity of your conclusions and arguments, as they enable you to establish whether an observed association is due to chance or not. The use of graphs to present the findings, though not essential, will make the information more easily understood by readers. As stated in the previous chapter, whether or not graphs are used depends upon the purpose for which the findings are to be used. The main difference between research and other writing is in the degree of control, rigorousness and caution required. Research writing is controlled in the sense that you need to be extremely careful about what you write, the words you choose, the way ideas are expressed, and the validity and verifiability of the bases for the conclusions you draw. What most distinguishes research writing from other writing is the high degree of intellectual rigour required. Research writing must be absolutely accurate, clear, free of ambiguity, logical and concise. Your writing should not be based upon assumptions about your readers' knowledge of the study. Bear in mind that you must be able to defend whatever you write should anyone challenge it.
Even the best researchers make a number of drafts before writing up their final one, so be prepared to undertake this task. The way findings are communicated differs in quantitative and qualitative research. As mentioned earlier, in qualitative research the findings are mostly communicated in descriptive or narrative format, written around the major themes, events or discourses that emerge from your findings. The main purpose is to describe the variation in a phenomenon, situation, event or episode without making an attempt to quantify the variation. One of the ways of writing a qualitative report is described in Chapter 15 as a part of the content analysis process. On the other hand, the writing in quantitative research, in addition to being descriptive, also quantifies the variation. Depending upon the purpose of the study, statistical measures and tests can also become a part of the research writing to support the findings.

Developing an outline
Before you start writing your report, it is good practice to develop an outline (‘chapterisation’). This means deciding how you are going to divide your report into different chapters and planning what will be written in each one. In developing the chapterisation, the subobjectives of your study or the major significant themes that emerged from content analysis can provide immense guidance. Develop the chapters around the significant subobjectives or themes of your study. Depending upon the importance of a theme or a subobjective, either devote a complete chapter to it or combine it with related themes to form one chapter. The title of each chapter should be descriptive of the main theme, communicate its main thrust and be clear and concise. The following approach is applicable to both qualitative and quantitative types of research, but keep in mind that it is merely suggestive and may be of help if you have no idea where to start. Feel free to change the suggested format in any way you like or, if you prefer a different one, follow that. The first chapter of your report, possibly entitled ‘Introduction’, should be a general introduction to the study, covering most of your project proposal and pointing out the deviations, if any, from the original plan. This chapter covers all the preparatory tasks undertaken prior to conducting the study, such as the literature review, the theoretical framework, the objectives of the study, the study design, the sampling strategy and the measurement procedures. To illustrate this, two examples are provided below for projects referred to previously in this book: the study on foster-care payments and the Family Engagement model. The first chapters of these reports could be written around the subheadings below. Keeping in view the purpose for which the Family Engagement evaluation was commissioned, its report was divided into three parts: the introduction, the perceived model, and conclusions and recommendations.
Attitudes towards foster-care payments: suggested contents of Chapter 1

Chapter 1: Introduction
Introduction
The development of foster care
Foster care in Australia
Foster care in Western Australia
The Department of Community Services
The out-of-home and community care programme
Current trends in foster-care placement in Western Australia
Becoming a foster carer
Foster-care subsidies
Issues regarding foster-care payment
Rationale for the study
Objectives of the study
Study design
Sampling
Measurement procedure
Problems and limitations
Working definitions

The Family Engagement – A service delivery model: suggested contents of Chapter 1

Part One: Introduction
Background: the origin of the Family Engagement idea
Historical perspective
The perceived model
Conceptual framework
Philosophical perspective underpinning the model
Intended outcomes
Objectives of the evaluation
Evaluation methodology

(Note: In this section, the conceptual framework of the model, its philosophical basis, perceived outcomes as identified by the person(s) responsible for initiating the idea, and what was available in the literature, were included.)

In the second chapter of the report, the relevant social, economic and demographic characteristics of the study population should be described. It provides readers with some background information about the population from which you collected the information so they can relate the findings to the type of population studied. It helps to identify the variance within a group; for example, you may want to examine how the level of satisfaction of the consumers of a service changes with their age, gender or education. The second chapter in a quantitative research report, therefore, could be entitled ‘Socioeconomic–demographic characteristics of the study population’ or just ‘The study population’. This chapter could be written around the subheadings below, which are illustrated by taking the example of the foster-care payment study. As qualitative studies are mostly based upon a limited number of in-depth interviews or observations, you may find it very difficult to write about the study population. The title and contents of subsequent chapters depend upon what you have attempted to describe, explore, examine, establish or prove in your study. As indicated earlier, the title of each chapter should reflect the main thrust of its contents. These subsections should be developed around the different aspects of the theme being discussed in the chapter. If you plan to correlate the information obtained from one variable with another, specify the variables. In deciding this, keep in mind the linkage and logical progression between the sections. This does not mean that the proposed outline cannot be changed when writing the report – it is possible for it to be significantly changed.
Maxmincon principle of variance: When studying causality between two variables, there are three sets of variables that impact upon the dependent variable. Since your aim as a researcher is to determine the change that can be attributed to the independent variable, you need to design your study to ensure that the independent variable has the maximum opportunity to have its full impact on the dependent variable, while the effects that are attributed to extraneous and chance variables are minimised. Setting up a study to achieve the above is known as adhering to the maxmincon principle of variance.

Narratives: The narrative technique of gathering information has even less structure than the focus group. Narratives have almost no predetermined contents except that the researcher seeks to hear the personal experience of a person with an incident or happening in his/her life. Essentially, the person tells his/her story about an incident or situation and you, as the researcher, listen passively, occasionally encouraging the respondent.

Nominal scale: The nominal scale is one of the ways of measuring a variable in the social sciences. It enables the classification of individuals, objects or responses based on a common/shared property or characteristic. These people, objects or responses are divided into a number of subgroups in such a way that each member of the subgroup has the common characteristic.

Non-experimental studies: There are times when, in studying causality, a researcher observes an outcome and wishes to investigate its causation. In a non-experimental study you neither introduce nor control/manipulate the cause variable.

Non-participant observation: When you, as a researcher, do not get involved in the activities of the group but remain a passive observer, watching and listening to its activities and interactions and drawing conclusions from them, this is called non-participant observation.

Non-probability sampling: Non-probability sampling designs do not follow the theory of probability in the selection of elements from the sampling population. Non-probability sampling designs are commonly used in both quantitative and qualitative research.

Null hypothesis: When you construct a hypothesis stipulating that there is no difference between two situations, groups, outcomes, or the prevalence of a condition or phenomenon, this is called a null hypothesis and is usually written as H0.

Objective-oriented evaluation: This is when an evaluation is designed to ascertain whether or not a programme or a service is achieving its objectives or goals.

Observation: Observation is a purposeful, systematic and selective way of watching and listening to an interaction or phenomenon as it takes place. Though dominantly used in qualitative research, it is also used in quantitative research.

Open-ended questions: In an open-ended question the possible responses are not given. In the case of a questionnaire, a respondent writes down the answers in his/her words, whereas in the case of an interview schedule the investigator records the answers either verbatim or in a summary describing a respondent’s answer.
Operational definition: When you define the concepts used in your research problem or study population in a measurable form, they are called working or operational definitions. It is important for you to understand that the working definitions that you develop are only for the purpose of your study.

Oral history: Oral history is more a method of data collection than a study design; however, in qualitative research, it has become an approach to study a historical event or episode that took place in the past or for gaining information about a culture, custom or story that has been passed on from generation to generation. Oral histories, like narratives, involve the use of both passive and active listening. Oral histories, however, are more commonly used for learning about cultural, social or historical events whereas narratives are more about a person’s own experiences.

Ordinal scale: An ordinal scale has all the properties of a nominal scale plus one of its own. Besides categorising individuals, objects, responses or a property into subgroups on the basis of a common characteristic, it ranks the subgroups in a certain order.

Outcome evaluation: The focus of an outcome evaluation is to find out the effects, impacts, changes or outcomes that the programme has produced in the target population.

Panel studies: Panel studies are prospective in nature and are designed to collect information from the same respondents over a period of time. The selected group of individuals becomes a panel that provides the required information. In a panel study the period of data collection can range from once only to repeated data collections over a long period.

Participant observation: Participant observation is when you, as a researcher, participate in the activities of the group being observed in the same manner as its members, with or without their knowing that they are being observed. Participant observation is principally used in qualitative research and is usually done by developing a close interaction with members of a group or ‘living in’ the situation which is being studied.

Participatory research: Both participatory research and collaborative enquiry are not study designs per se but signify a philosophical perspective that advocates an active involvement of research participants in the research process. Participatory research is based upon the principle of minimising the ‘gap’ between the researcher and the research participants. The most important feature is the involvement and participation of the community or research participants in the research process to make the research findings more relevant to their needs.

Pie chart: As there are 360 degrees in a circle, the full circle can be used to represent 100 per cent or the total population. The circle or pie is divided into sections in accordance with the magnitude of each subcategory comprising the total population. Hence each slice of the pie is in proportion to the size of each subcategory of a frequency distribution.

Pilot study: See Feasibility study.

Placebo effect: A patient’s belief that s/he is receiving the treatment plays an important role in his/her recovery even though the treatment is fake or ineffective. The change occurs because a patient believes that s/he is receiving the treatment. This psychological effect that helps a patient to recover is known as the placebo effect.

Placebo study: A study that attempts to determine the extent of a placebo effect is called a placebo study.
A placebo study is based upon a comparative study design that involves two or more groups, depending on whether or not you want to have a control group to isolate the impact of extraneous variables or other treatment modalities to determine their relative effectiveness.

Polytomous variable: When a variable can be divided into more than two categories, for example religion (Christian, Muslim, Hindu), political parties (Labor, Liberal, Democrat), and attitudes (strongly favourable, favourable, uncertain, unfavourable, strongly unfavourable), it is called a polytomous variable.

Population mean: From what you find out from your sample (sample statistics) you make an estimate of the prevalence of these characteristics for the total study population. The estimates about the total study population made from sample statistics are called population parameters or the population mean.

Predictive validity: Predictive validity is judged by the degree to which an instrument can correctly forecast an outcome: the higher the correctness in the forecasts, the higher the predictive validity of the instrument.

Pre-test: In quantitative research, pre-testing is a practice whereby you test something that you developed before its actual use to ascertain the likely problems with it. The pre-test of a research instrument entails a critical examination of each question as to its clarity, understanding, wording and meaning as understood by potential respondents, with a view to removing possible problems with the question. It ensures that a respondent’s understanding of each question is in accordance with your intentions. Pre-testing a code book entails actually coding a few questionnaires/interview schedules to identify any problems with the code book before coding the data.

Primary data: Information collected for the specific purpose of a study either by the researcher or by someone else is called primary data.

Primary sources: Sources that provide primary data, such as interviews, observations and questionnaires, are called primary sources.

Probability sampling: When selecting a sample, if you adhere to the theory of probability, that is you select the sample in such a way that each element in the study population has an equal and independent chance of selection in the sample, the process is called probability sampling.

Process evaluation: The main emphasis of process evaluation is on evaluating the manner in which a service or programme is being delivered in order to identify ways of enhancing the efficiency of the delivery system.

Programme planning evaluation: Before starting a large-scale programme it is desirable to investigate the extent and nature of the problem for which the programme is being developed. When an evaluation is undertaken with the purpose of investigating the nature and extent of the problem itself, it is called programme planning evaluation.

Proportionate stratified sampling: In proportionate stratified sampling, the number of elements selected in the sample from each stratum is in proportion to the stratum’s share of the total population (see the sketch below).

Prospective studies: Prospective studies refer to the likely prevalence of a phenomenon, situation, problem, attitude or outcome in the future. Such studies attempt to establish the outcome of an event or what is likely to happen. Experiments are usually classified as prospective studies because the researcher must wait for an intervention to register its effect on the study population.
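The proportional allocation described in the proportionate stratified sampling entry above is easy to express as a calculation. The sketch below is illustrative only: the strata, population counts, total sample size and rounding rule are invented assumptions, not examples from the text.

```python
# Minimal sketch: proportionate stratified sampling with invented strata sizes.
# Each stratum contributes to the sample in proportion to its share of the
# total study population.
import random

population_by_stratum = {"urban": 6000, "regional": 3000, "remote": 1000}  # hypothetical
total_population = sum(population_by_stratum.values())
sample_size = 200                                                          # hypothetical

# Proportional allocation: n_h = n * (N_h / N), rounded to whole elements.
allocation = {
    stratum: round(sample_size * count / total_population)
    for stratum, count in population_by_stratum.items()
}
print(allocation)  # {'urban': 120, 'regional': 60, 'remote': 20}

# Within each stratum, elements would then be selected at random, for example:
urban_frame = list(range(6000))               # stand-in sampling frame for one stratum
urban_sample = random.sample(urban_frame, allocation["urban"])
```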
Pure research: Pure research is concerned with the development, examination, verification and refinement of research methods, procedures, techniques and tools that form the body of research methodology.

Purposive sampling: See Judgemental sampling.

Qualitative research: In the social sciences there are two broad approaches to enquiry: qualitative and quantitative, or unstructured and structured approaches. Qualitative research is based upon the philosophy of empiricism; it follows an unstructured, flexible and open approach to enquiry, aims to describe rather than measure, believes in in-depth understanding and small samples, and explores perceptions and feelings rather than facts and figures.

Quantitative research: Quantitative research is a second approach to enquiry in the social sciences that is rooted in rationalism, follows a structured, rigid, predetermined methodology, believes in having a narrow focus, emphasises greater sample size, aims to quantify the variation in a phenomenon and tries to make generalisations to the total population.

Quasi-experiments: Studies which have the attributes of both experimental and non-experimental studies are called quasi- or semi-experiments.

Questionnaire: A questionnaire is a written list of questions, the answers to which are recorded by respondents. In a questionnaire respondents read the questions, interpret what is expected and then write down the answers. The only difference between an interview schedule and a questionnaire is that in the former it is the interviewer who asks the questions (and, if necessary, explains them) and records the respondent’s replies on an interview schedule, while in the latter replies are recorded by the respondents themselves.

Quota sampling: The main consideration directing quota sampling is the researcher’s ease of access to the sample population. In addition to convenience, a researcher is guided by some visible characteristic of interest, such as gender or race, of the study population. The sample is selected from a location convenient to you as a researcher, and whenever a person with this visible relevant characteristic is seen, that person is asked to participate in the study.

Random design: In a random design, the study population groups as well as the experimental treatments are not predetermined but randomly assigned to become control or experimental groups. Random assignment in experiments means that any individual or unit of the study population has an equal and independent chance of becoming a part of the experimental or control group or, in the case of multiple treatment modalities, any treatment has an equal and independent chance of being assigned to any of the population groups. It is important to note that the concept of randomisation can be applied to any of the experimental designs.

Random sampling: For a design to be called random or probability sampling, it is imperative that each element in the study population has an equal and independent chance of selection in the sample (see the sketch below). Equal implies that the probability of selection of each element in the study population is the same. The concept of independence means that the choice of one element is not dependent upon the choice of another element in the sampling.

Random variable: When collecting information from respondents, there are times when the mood of a respondent or the wording of a question can affect the way a respondent replies. Variation introduced in this way is attributed to random or chance variables.
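The 'equal and independent chance' requirement in the random sampling and random design entries above can be illustrated in a few lines. The sketch below is an assumption-laden toy: the population size, sample size and group labels are invented, and Python's random module stands in for whatever randomisation procedure a real study would document.

```python
# Minimal sketch: simple random sampling and random assignment to groups,
# using an invented sampling frame of 1,000 element IDs.
import random

sampling_frame = list(range(1, 1001))       # hypothetical study population
sample = random.sample(sampling_frame, 50)  # each element has an equal chance of selection

# Random assignment: each selected element has an equal and independent
# chance of ending up in the experimental or the control group.
groups = {"experimental": [], "control": []}
for element in sample:
    groups[random.choice(["experimental", "control"])].append(element)

print(len(groups["experimental"]), len(groups["control"]))
```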
Randomisation: In experimental and comparative studies, you often need to study two or more groups of people. In forming these groups it is important that they are comparable with respect to the dependent variable and other variables that affect it, so that the effects of independent and extraneous variables are uniform across groups. Randomisation is a process that ensures that each and every person in a group is given an equal and independent chance of being in any of the groups, thereby making the groups comparable.

Ratio scale: A ratio scale has all the properties of nominal, ordinal and interval scales plus its own property: the zero point of a ratio scale is fixed, which means it has a fixed starting point. As the difference between the intervals is always measured from a zero point, arithmetical operations can be performed on the scores.

Reactive effect: Sometimes the way a question is worded informs respondents of the existence or prevalence of something that the study is trying to find out about as an outcome of an intervention. This effect is known as the reactive effect of the instrument.

Recall error: Error that can be introduced in a response because of a respondent’s inability to recall correctly its various aspects when replying.

Regression effect: Sometimes people who place themselves on the extreme positions of a measurement scale at the pre-test stage may, for a number of reasons, shift towards the mean at the post-test stage. Therefore, the mere expression of the attitude in response to a questionnaire or interview has caused them to think about and alter their attitude towards the mean at the time of the post-test.

Reflective journal log: Basically this is a method of data collection in qualitative research that entails keeping a log of your thoughts as a researcher whenever you notice anything, talk to someone, participate in an activity or observe something that helps you understand or add to whatever you are trying to find out about.

Reflexive control design: In experimental studies, to overcome the problem of comparability in different groups, researchers sometimes study only one population and treat data collected during the non-intervention period as representing a control group, and information collected after the introduction of the intervention as if it pertained to an experimental group. It is the periods of non-intervention and intervention that constitute the control and experimental groups.

Reliability: Reliability is the ability of a research instrument to provide similar results when used repeatedly under similar conditions. Reliability indicates the accuracy, stability and predictability of a research instrument: the higher the reliability, the higher the accuracy; or the higher the accuracy of an instrument, the higher its reliability.

Replicated cross-sectional design: This study design is based upon the assumption that participants at different stages of a programme are similar in terms of their socioeconomic–demographic characteristics and the problem for which they are seeking intervention. Assessment of the effectiveness of an intervention is done by taking a sample of clients who are at different stages of the intervention. The difference in the dependent variable among clients at the intake and termination stages is considered to be the impact of the intervention.

Research: Research is one of the ways of finding answers to your professional and practice questions. However, it is characterised by the use of tested procedures and methods and an unbiased and objective attitude in the process of exploration.
Research design: A research design is a procedural plan that is adopted by the researcher to answer questions validly, objectively, accurately and economically. A research design therefore answers the questions that determine the path you propose to take on your research journey.

Research objectives: Research objectives are specific statements of the goals that you set out to achieve at the end of your research journey.
Evaluating a programme: Example One
For the first evaluation, after having initial discussions with various stakeholders, it was discovered that understanding of the principle of ‘community responsiveness’ was extremely vague and varied among different people. Also, there were neither any instructions about how to achieve community responsiveness nor any training programme for the purpose. A few people, responsible for ensuring the implementation of the principle, had no idea about its implementation. Our first question was: ‘Can we evaluate something about which those responsible for implementation are not clear, and for which there is no specific strategy in place?’ We discussed with the sponsors of the evaluation what questions they had in mind when asking us for the evaluation. On the basis of our discussion with them and our understanding of their reasons for requesting the evaluation, we proposed that the evaluation be carried out in two phases. For the first phase, the aim of the evaluation should be to define ‘community responsiveness’, identify/develop/explore operational strategies to achieve it, and identify the indicators of its success or otherwise. During the second phase, an evaluation to measure the impact of implementation of the community responsiveness strategies was proposed. We developed the following objectives in consultation with the various stakeholders.

Evaluation of the principle of community responsiveness in health

Phase One
Main objective: To develop a model for implementing the principle of community responsiveness in the delivery of health care in … (name of the state).
To find out how the principle of community responsiveness is understood by health planners, administrators, managers, service providers and consumers, and to develop an operational definition of the term for the department.
To identify, with the participation of stakeholders, strategies to implement the concept of community responsiveness in the delivery of health services.
To develop a set of indicators to evaluate the effectiveness of the strategies used to achieve community responsiveness.
To identify appropriate methodologies that are acceptable to stakeholders for measuring effectiveness indicators.

Phase Two
Main objective: To evaluate the effectiveness of the strategies used to achieve the principle of community responsiveness in the delivery of health services.
To determine the impact of community responsiveness strategies on community participation in decision making about health issues affecting the community.
To find out the opinions of the various stakeholders on the degree to which the provision of community responsiveness in the delivery of health services has been/is being observed.
To find out the extent of involvement of the community in decision making on issues concerning the community, and its attitude towards involvement.

In the second example, the service delivery model was well developed and the evaluation brief was clear in terms of its expectations; that is, the objective was to evaluate the model’s effectiveness.
Before starting the evaluation, the following objectives were developed in consultation with the steering committee, which had representatives from all stakeholder groups. Remember, it is important that your objectives be unambiguous, clear and specific, and that they are written using verbs that express your operational intentions.

The … Model
Main objective: To evaluate the effectiveness of the … (name of the model) developed by … (name of the office).
To identify the strengths and weaknesses of the model as perceived by various stakeholders.
To find out the attitudes of consumers, service providers and managers, and relevant community agencies towards the model.
To determine the extent of reduction, if any, in the number of children in the care of the department since the introduction of the model.
To determine the impact of the model on the number of Child Concern Reports and Child Maltreatment Allegations.
To assess the ability of the model to build the capacity of consumers and service providers to deal with problems in the area of child protection.
To estimate the cost of delivering services in accordance with the model to a family.

Step 3: Converting concepts into indicators into variables
In evaluation, as well as in other research studies, often we use concepts to describe our intentions. For example, we say that we are seeking to evaluate outcomes, effectiveness, impact or satisfaction. The meaning ascribed to such words may be clear to you but may differ markedly from the understanding of others. They need operational definitions in terms of their measurement in order to develop a uniform understanding. When you use concepts, the next problem you need to deal with is the development of a ‘meaning’ for each concept that describes them appropriately for the contexts in which they are being applied. The meaning of a concept in a specific situation is arrived at by developing indicators. To develop indicators, you must answer questions such as: ‘What does this concept mean?’ Indicators are specific, observable, measurable characteristics or changes that can be attributed to the programme or intervention. A critical challenge to an evaluator in outcome measurement is identifying and deciding what indicators to use in order to assess how well the programme being evaluated has done regarding an outcome. Remember that not all changes or impacts of a programme may be reflected by one indicator. In many situations you need to have multiple indicators to make an assessment of the success or failure of a programme. For example, an indicator such as the number of programme users is easy to measure, whereas a programme’s impact on self-esteem is more difficult to measure. In order to assess the impact of an intervention, different types of effectiveness indicators can be used. These indicators may be either qualitative or quantitative, and their measurement may range from subjective–descriptive impressions to objective–measurable–discrete changes. If you are inclined more towards qualitative studies, you may use in-depth interviewing, observation or focus groups to establish whether or not there have been changes in perceptions, attitudes or behaviour among the recipients of a programme with respect to these indicators. In this case, changes are as perceived by your respondents: there is, as such, no measurement involved. On the other hand, if you prefer a quantitative approach, you may use various methods to measure change in the indicators using interval or ratio scales.
In all the designs that we have discussed above in outcome evaluation, you may use qualitative or quantitative indicators to measure outcomes. Suppose you are working in a department concerned with the protection of children and are testing a new model of service delivery. Let us further assume that your model is to achieve greater participation and involvement of children, their families and non-statutory organisations working in the community in decision making about children. Your assumption is that with their involvement and participation in developing the proposed intervention strategies, higher compliance will result, which, in turn, will result in the achievement of the desired goals. As part of your evaluation of the model, you may choose a number of indicators, such as the impact on the:
number of children under the care of the department/agency;
number of children returned to the family or the community for care;
number of reported cases of ‘Child Maltreatment Allegations’;
number of reported cases of ‘Child Concern Reports’;
extent of involvement of the family and community agencies in the decision-making process about a child.
You may also choose indicators such as the attitude of:
children, where appropriate, and family members towards their involvement in the decision-making process;
service providers and service managers towards the usefulness of the model;
non-statutory organisations towards their participation in the decision-making process;
various stakeholders towards the ability of the model to build the capacity of consumers of the service for self-management;
family members towards their involvement in the decision-making process.
The scales used in the measurement determine whether an indicator will be considered as ‘soft’ or ‘hard’. Attitude towards an issue can be measured using well-advanced attitudinal scales or by simply asking a respondent to give his/her opinion. The first method will yield a hard indicator while the second will provide a soft one. Similarly, a change in the number of children, if asked as an opinion question, will be treated as a soft indicator. Once you have understood the logic behind this operationalisation, you will find it easier to apply in other similar situations.

Step 4: Developing evaluation methodology
As with a non-evaluative study, you need to identify the design that best suits the objectives of your evaluation, keeping in mind the resources at your disposal. In most evaluation studies the emphasis is on ‘constructing’ a comparative picture, before and after the introduction of an intervention, in relation to the indicators you have selected. On the basis of your knowledge about study designs and the designs discussed in this chapter, you propose one that is most suitable for your situation. Also, as part of the evaluation methodology, do not forget to consider other aspects of the process, such as: From whom will you collect the required information? How will you seek the informed consent of your respondents for their participation in the evaluation? What is the relevance of the evaluation for your respondents or others in a similar situation?

Step 5: Collecting data
As in a research study, data collection is the most important and time-consuming phase. As you know, the quality of evaluation findings is entirely dependent upon the data collected. Whether quantitative or qualitative methods are used for data collection, it is essential to ensure that quality is maintained in the process.
You can have a highly structured evaluation, placing great emphasis on indicators and their measurement, or you can opt for an unstructured and flexible enquiry: as mentioned earlier, the decision is dependent upon the purpose of your evaluation. For exploratory purposes, flexibility and a lack of structure are an asset, whereas, if the purpose is to formulate a policy, measure the impact of an intervention or to work out the cost of an intervention, a greater structure and standardisation and less flexibility are important.

Step 6: Analysing data
As with research in general, the way you can analyse the data depends upon the way it was collected and the purpose for which you are going to use the findings. For policy decisions and decisions about programme termination or continuation, you need to ascertain the magnitude of change, based on a reasonable sample size. However, if you are evaluating a process or procedure, you can use an interpretive frame of analysis.

Step 7: Writing an evaluation report
As previously stated, the quality of your work and the impact of your findings are greatly dependent upon how well you communicate them to your readers. In the author’s opinion, you should communicate your findings under headings that reflect the objectives of your evaluation. It is also suggested that the findings be accompanied by recommendations pertaining to them. Your report should also have an executive summary of your findings and recommendations.

Step 8: Sharing findings with stakeholders
A very important aspect of any evaluation is sharing the findings with the various groups of stakeholders. It is a good idea to convene a group comprising all stakeholders to communicate what your evaluation has found. It is of utmost importance that you adhere to ethical principles and the professional code of conduct. As you have seen, the process of a research study and that of an evaluation is almost the same. The only difference is the use of certain models in the measurement of the effectiveness of an intervention. It is therefore important for you to know about research methodology before undertaking an evaluation.

Involving stakeholders in evaluation
Most evaluations have a number of stakeholders, ranging from consumers to experts in the area, including service providers and managers. It is important that all categories of stakeholder be involved at all stages of an evaluation. Failure to involve any group may hinder success in completion of the evaluation and seriously affect confidence in your findings. It is therefore important that you identify all stakeholders and seek their involvement and participation in the evaluation. This ensures that they feel a part of the evaluation process, which, in turn, markedly enhances the probability of their accepting the findings. The following steps outline a process for involving stakeholders in an evaluation study.

Step 1: First of all, talk with managers, planners, programme administrators, service providers and the consumers of the programme, either individually or collectively, and identify who they think are the direct and indirect stakeholders. Having collected this information, share it with all groups of stakeholders to see if anyone has been left out. Prepare a list of all stakeholders, making sure it is acceptable to all significant ones.
Step 2: In order to develop a common perspective with respect to various aspects of the evaluation, it is important that different categories of stakeholder be actively involved in the whole process of evaluation, from the identification of their concerns to the sharing of its findings. In particular, it is important to involve them in developing a framework for evaluation, selecting the evaluation indicators, and developing procedures and tools for their measurement.

Step 3: Different stakeholders may have different understandings of the word ‘evaluation’. Some may have a very definite opinion about it and how it should be carried out while others may not have any conception. Different stakeholders may also have different opinions about the relevance of a particular piece of information for answering an evaluation question. To make evaluation meaningful to the majority of stakeholders, it is important that their perspectives and understandings of evaluation be understood and that a common perspective on the evaluation be arrived at during the planning stage.

Step 4: As an evaluator, if you find that stakeholders have strong opinions and there is a conflict of interest among them with respect to any aspect of the evaluation, it is extremely important to resolve it. However, you have to be very careful in resolving differences and must not give the impression that you are favouring any particular subgroup.

Step 5: Identify, from each group of stakeholders, the information they think is important to meet their needs and the objectives of the evaluation.

Step 6: For routine consultation, the sharing of ideas and day-to-day decision making, it is important that you ask the stakeholders to elect a steering committee with whom you, as the evaluator, can consult and interact. In addition to providing you with a forum for consultation and guidance, such a committee gives stakeholders a continuous sense of involvement in the evaluation.

If for some reason you cannot be ethical, do not undertake the evaluation, as you will end up doing harm to others, and that is unethical. Although, as a good evaluator, you may have involved all the stakeholders in the planning and conduct of the evaluation, it is possible that sometimes, when findings are not in someone’s interest, a stakeholder will challenge you.
In a well-designed adaptive trial, that flexibility can result in lower drug development costs, reduced time to market and improved participant safety. Cost reduction is achieved by identifying successful trials sooner, dropping unnecessary treatment groups or determining effective dose regimens. Participant safety is improved because adaptive trials tend to reduce exposure to unsuccessful treatment groups and increase access to effective treatment groups. Adaptive trial design requires modern data collection technologies to provide the research team with real-time information, and enables them to plan and quickly implement seamless changes in response to that information. A key enabling technology for adaptive trial design is, for instance, real-time electronic data capture over the Internet to a central database. The general impression is that utilising adaptive clinical trial design will become more and more popular. The adaptive trial design is still in its infancy and may become generally accepted in the future.

A control group is chosen from the same population as the test group and treated in a defined way as part of the same trial studying the test treatment. Test and control groups should be similar at the initiation of the trial on variables that could influence outcome, except for the trial treatment. The choice of control group affects the inferences that can be drawn from the trial, the ethical acceptability of the trial, the degree to which bias in conducting and analysing the trial can be minimised, the types of participants that can be recruited and the pace of recruitment, the kinds of endpoints that can be studied, the public and scientific credibility of the results, the acceptability of the results by regulatory authorities, and many other features of the trial, its conduct and its interpretation. This design is needed and suitable only when it is difficult or impossible to use blinding. Such trials are usually double-blind, but this is not always possible as blinding to the two treatments may be impossible. Active control trials can have two objectives with respect to showing efficacy: to show efficacy of the test treatment by showing it is as good as the standard treatment, or by showing superiority of the test treatment to the known effective treatment. An externally controlled trial compares a group of participants receiving the test treatment with a group of participants external to the trial. The external control can be a group of participants treated at an earlier time (historical control) or a group treated during the same time period but in another setting. Trials can, for instance, use several doses of a test drug and several doses of an active control, with or without placebo.

Choice of participants – the trial sample – should mirror the total participant population for which the drug may eventually be indicated. However, this is not the case for early phase trials, when the choice of participants is influenced by research questions such as human pharmacology. For confirmatory late phase trials, the participants should closely mirror the target patient population.
However, how much the trial participants represent future users may be influenced by the medical practices and level of standard care of a particular investigator, clinic or geographic region. The influence of such factors should be reduced and discussed during interpretation of the results.

Placebo Treatment
The Declaration of Helsinki states: 'The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention, except in the following circumstances: the use of placebo, or no treatment, is acceptable in studies where no current proven intervention exists; or where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy or safety of an intervention and the participants who receive placebo or no treatment will not be subject to any risk of serious or irreversible harm.' However, using a placebo control may pose ethical concerns if an effective treatment is available. When the available treatment is known to prevent serious harm, such as death or irreversible morbidity, it is most often inappropriate to use a placebo control. An exception is, for instance, when the standard therapy has such severe toxicity that participants will not accept it. When a placebo-controlled trial is not associated with serious harm, it is by and large ethically sound to use a placebo-controlled trial design, even with some discomfort, assuming that the participants are fully informed about available therapies and the consequences of delaying treatment. Placebo or no-treatment control does not mean a participant does not receive treatment at all. The best supportive available care will normally be provided, plus the same clinical follow-up as the active treatment group. Placebo-controlled trials can also be conducted as add-on trials where all participants receive a standard therapy.

Placebo-controlled trials measure the total mediated effect of treatment, while active control trials, or dose-comparison trials, measure the effect relative to another treatment. They also make it possible to distinguish between adverse events caused by the drug and those caused by the underlying disease. It should also be noted that placebo-controlled trials provide little useful information about the comparative effectiveness of standard treatment.

However, when standard treatment is used, the mean duration to symptom recovery is 4. A drug company has developed a new test article. Theoretically, the new test article is more effective, being able to reduce the average number of days to recovery to 4. If the comparison is against standard treatment, to show a statistical difference between the two treatment groups, we need to recruit 274 participants for each group (the calculation is based on certain assumptions not described in detail). But only 69 participants are needed per group if no treatment – placebo – is used as a comparison. In this scenario, 410 extra participants are put at risk of harm when standard treatment is used as a comparison. Yet in fact we do not know whether the test article has any effect at all or is safe when given to participants.

Placebo and sample size: the sample size of a trial is influenced by the type of comparison. Here we illustrate that a placebo treatment group design will require 138 study participants in total, compared with 548 when utilising a standard treatment control group.
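The contrast between the two sample sizes can be made explicit with the usual normal-approximation formula for comparing two means, n per group ≈ 2(z(1-α/2) + z(1-β))²σ²/Δ². The sketch below is illustrative only: the significance level, power, standard deviation and mean differences are assumptions (the text states that its own calculation rests on assumptions it does not describe), so the printed numbers only approximate the 274 and 69 quoted above. The point it demonstrates is that the required sample size grows with the inverse square of the difference you want to detect.

```python
# Minimal sketch of the two-sample sample-size formula for comparing means:
#   n per group ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
# All inputs are illustrative assumptions (the text does not state them):
# two-sided alpha = 0.05, power = 0.80, common SD of recovery time ~2.1 days.
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate participants per group to detect a mean difference `delta`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

sigma = 2.1                                           # assumed SD of days-to-recovery
n_vs_standard = n_per_group(delta=0.5, sigma=sigma)   # assumed small difference vs standard care
n_vs_placebo = n_per_group(delta=1.0, sigma=sigma)    # assumed larger difference vs placebo

print(f"vs standard treatment: ~{n_vs_standard} per group ({2 * n_vs_standard} in total)")
print(f"vs placebo:            ~{n_vs_placebo} per group ({2 * n_vs_placebo} in total)")
# Because n scales with 1/delta^2, halving the detectable difference roughly
# quadruples the required sample size, which is the effect described in the text.
```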
Efficacy, safety and quality of life are the most common and widely accepted indicators:
- Efficacy is an estimate of how effective the test medicinal product is in eliminating or reducing the symptoms, or the long-term endpoints, of the condition under trial. Efficacy measures can be of many kinds, such as blood pressure, tumour size, fever, liver function tests or body mass index.
- Safety: all adverse reactions or events that a trial participant experiences during the conduct of the trial should be documented. The investigators monitor for adverse reactions or events to assess safety during the clinical trial. Adverse events can be mild, such as local short-term reactions and headaches, or serious, such as stroke and death.
- Quality of life (QoL) includes physical, mental and social well-being, and not just the absence of disease or illness. There are broad QoL measurements that are not very specific to the disease or condition (general well-being), and there are disease-specific questionnaires that are more sensitive to treatment and disease influences. All questionnaires must be properly validated before they are used as a trial endpoint.

Trial participants are usually assessed at a minimum of three time points: screening, baseline and end of trial.
- Screening: trial participants are commonly examined before a trial starts to assess their health status in relation to the trial inclusion/exclusion criteria.
- Baseline: the time point when the clinical trial starts, just before any treatment begins.
- End of trial: the trial endpoint measure is repeated at the end of the trial. The research team often compares the baseline endpoint values with those recorded at the end of the trial to see how well the treatment worked. A trial endpoint is usually estimated as the difference between the end value and the baseline value of the endpoint measure. In some trials, follow-up continues after the end-of-treatment visit.

In addition to these visits, participants will attend the study site several times during the course of a trial, for instance to collect trial medication or other medications, or to undergo a physical examination and follow-up tests. Adverse events (side-effects) and test article dispensing/compliance information are often accumulated continuously throughout the trial, for example by means of laboratory tests or home log-books, and such accumulated information is commonly used in the final safety analysis. Primary and secondary endpoints (see below) are commonly recorded or assessed at some or all of these extra site visits as well; one reason for this is that if a participant drops out during the active trial period, the data can still be used for some of the statistical endpoint analyses. All details about trial endpoints – how they are assessed, at what time points and how they are analysed – must be clearly spelled out in the clinical trial protocol.

Primary and Secondary Outcome/Endpoint
The primary endpoint of a trial is the variable providing the most relevant and convincing evidence related to the prime objective of the trial. Safety may occasionally serve as the primary variable, but safety is always an important consideration even when it serves as a secondary set of endpoints. Selecting the primary variable is one of the most important tasks when designing a clinical trial, since it is the gateway for acceptance of the results.
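To make the baseline versus end-of-trial comparison concrete, the sketch below computes a change-from-baseline endpoint and a between-group treatment difference for a handful of invented participants. The measurements, group labels and the mean_change helper are hypothetical and are not taken from any real trial.

```python
from statistics import mean

# Hypothetical endpoint data (e.g. systolic blood pressure in mmHg);
# participants, groups and values are invented for illustration.
records = [
    {"id": 1, "group": "test",    "baseline": 152, "end": 138},
    {"id": 2, "group": "test",    "baseline": 149, "end": 141},
    {"id": 3, "group": "test",    "baseline": 158, "end": 140},
    {"id": 4, "group": "control", "baseline": 151, "end": 147},
    {"id": 5, "group": "control", "baseline": 155, "end": 150},
    {"id": 6, "group": "control", "baseline": 150, "end": 149},
]

def mean_change(group):
    """Mean change from baseline (end value minus baseline value) in one group."""
    return mean(r["end"] - r["baseline"] for r in records if r["group"] == group)

# The per-participant endpoint is the change from baseline; the treatment
# effect is then summarised as the difference in mean change between groups.
print(f"mean change, test group:    {mean_change('test'):.1f}")
print(f"mean change, control group: {mean_change('control'):.1f}")
print(f"estimated treatment difference: {mean_change('test') - mean_change('control'):.1f}")
```

In a real trial this comparison would of course be accompanied by the pre-specified statistical analysis set out in the protocol.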
We must produce evidence that the primary variable is a valid and reliable measure reflecting clinically relevant and important treatment benefits. It should be well defined in the protocol, along with the rationale for selecting it, when it will be measured during the course of the trial and how the statistical analysis will be carried out. Redefining the primary endpoint after the trial has been completed is unacceptable, since it violates the trial design and may be unethical, especially when the originally specified primary endpoint showed no statistically significant difference between the treatment groups. Secondary endpoints can be supportive measurements of the primary objective or measurements of effects related to other, secondary objectives. These should also be pre-defined in the protocol, with an explanation of their importance and their role in interpreting the trial results. In a typical protocol, both the primary and secondary endpoints are clearly defined, for example with an efficacy estimate as the primary endpoint (such as a comparison of overall survival between the two treatment groups) and safety as the secondary endpoint.

Surrogate or Clinical Outcome/Endpoint
A trial endpoint should fulfil three criteria: (1) it should be measurable and interpretable, (2) it should be sensitive to the objective of the trial, and (3) it should be clinically relevant. Surrogate endpoints are used because they can be measured earlier and more frequently, and are often more convenient or less invasive than clinical endpoints. Additional advantages are that their use can reduce the sample size of clinical trials, shorten their duration and thus reduce their cost. Using surrogate endpoints also puts fewer trial participants at risk from adverse reactions to the test article. Examples of clinical and surrogate endpoints in clinical trials are numerous: in cardiovascular trials, for instance, blood pressure and cholesterol levels are commonly used as surrogate measures, while the true clinical endpoints are myocardial infarction and death. The drug regulatory authority may request the use of a clinical endpoint, rather than a surrogate endpoint, as the most important health indicator in a clinical trial for a specific disease. Because such clinical events are rare, however, many participants need to be studied in confirmatory trials. In the exploratory early phase of a new therapy it is therefore common to use a surrogate endpoint, which reduces both the sample size and the duration of the trial. Clinical endpoints measure the progression of the disease and directly measure clinical benefit to the patient, such as survival or cure of the disease. A surrogate endpoint is a marker on the disease causal pathway and is assumed to reflect, and correlate with, the clinical endpoints. It is therefore essential to have a comprehensive understanding of the causal pathways of the disease process. For instance, do changes in measures from brain imaging precede changes in the true clinical endpoint in Alzheimer's disease? The main reason for the failure of surrogate endpoints is that the surrogate does not play a crucial role in the pathway of the effect of the intervention: an intervention could affect the surrogate endpoint but not the clinical endpoint. For instance, one diabetes drug approved on the basis of surrogate markers has in fact been linked with an increased risk of myocardial infarction.
Ultimately, approval of a test article based on effects on a surrogate endpoint involves an extrapolation from experience with existing products to an untested test article, and such extrapolation is seldom straightforward. A pilot trial evaluated four active drugs (Encainide, Ethmozine, Flecainide, Imipramine) against placebo in 500 participants, using the surrogate endpoint of asymptomatic arrhythmia. Based on the results of this pilot trial, a full-scale trial (the Cardiac Arrhythmia Suppression Trial, CAST) began enrolling participants in 1987, and after less than one year of follow-up the Encainide and Flecainide arms were stopped because of a three-fold increase in mortality compared with placebo. This example illustrates that a drug can mitigate disease symptoms – representing a surrogate endpoint – yet over the long term be associated with a negative clinical outcome (here, death).

Many large-scale clinical trials have sought effective new treatments where the clinically important endpoint – such as cardiac arrest or death – is expected to be prevented. A trial of lipid-lowering therapy using a surrogate endpoint – serum lipid level – may need around 100 participants followed for 3 to 12 months; if the endpoint is instead the incidence of cardiovascular events, thousands of participants need to be studied, typically over several years. Most drug therapies have multiple effects and, therefore, relying on a single surrogate endpoint that focuses on an intermediate effect is not a very safe pathway. One approach is to require that new drug therapies be tested in large, long-term clinical trials assessing their effects on clinical endpoints. The use of surrogate endpoints is in this way avoided, and effects on major health endpoints are known prior to marketing. But such an approach slows approval and clinical use of the test article and can be very expensive, which is a particular problem for severe diseases with no effective standard treatment. Moreover, in daily clinical practice drugs are prescribed not only for the relatively healthy and usually younger patients who enter clinical trials, but also for older patients and patients with multiple diseases, and rare, unexpected, serious side effects might not be detected during the course of clinical trials. Thus, real-world clinical effectiveness and/or safety may not be fully mirrored by clinical trials.
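The contrast between a surrogate endpoint and a clinical event endpoint can also be put in rough numbers. In the sketch below the surrogate is treated as a continuous measure (say, a half-standard-deviation reduction in serum lipid level) and the clinical endpoint as a binary event whose rate falls from 4% to 3%; these effect sizes and event rates, and the helper functions, are illustrative assumptions rather than figures from the text.

```python
from math import ceil
from statistics import NormalDist

ALPHA, POWER = 0.05, 0.80
Z = NormalDist().inv_cdf(1 - ALPHA / 2) + NormalDist().inv_cdf(POWER)  # about 2.80

def n_continuous(delta, sd):
    """Per-group n for comparing two means (normal approximation)."""
    return ceil(2 * (Z * sd / delta) ** 2)

def n_binary(p_control, p_treated):
    """Per-group n for comparing two event proportions (normal approximation)."""
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil(Z ** 2 * variance / (p_control - p_treated) ** 2)

# Surrogate: half a standard deviation off the lipid level -> a few dozen per group.
print("surrogate endpoint, per group:", n_continuous(delta=0.5, sd=1.0))
# Clinical endpoint: cardiovascular event rate reduced from 4% to 3% -> thousands.
print("clinical endpoint, per group: ", n_binary(0.04, 0.03))
```

The order-of-magnitude gap (dozens versus thousands of participants per group) is what drives the use of surrogate endpoints in early, exploratory trials, and equally why confirmatory trials on clinical endpoints are so large and long.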