1. Applied research is defined as research that specifically addresses existing problems or opportunities and is directly associated with practical problem-solving. Basic research is also connected with problem-solving but aims to solve perplexing questions or obtain new knowledge of an experimental or theoretical nature. An example of applied research would be making adjustments to current processes within a small organization. An example of basic research would be a researcher looking into how stress affects day-to-day activities (Cooper & Schindler, pg. 15).
2. The four characteristics of effective codes of ethics are introduced in the text as (1) regulative, (2) protective, (3) behavior-specific, and (4) enforceable (Cooper & Schindler, pg. 42). Of the four characteristics, the most critical is enforceability. It is suggested that unless ethical codes and policies are enforced on a consistent basis, they are of limited value in actually regulating unethical conduct (Cooper & Schindler, pg. 42).
3. An operational definition is described as one that is stated in terms of specific criteria for testing or measurement. It must also refer to empirical standards, in terms of counting, measuring, or somehow gathering information through our senses (Cooper & Schindler, pg. 53). An operational definition of binge drinking is the consumption of an excessive number of drinks (3 or more) within a specific amount of time (2 hours). An example of a bad operational definition would be the consumption of 3 drinks in one sitting. That definition is poor because, although it specifies a number of drinks, it gives no information on the time frame or how it can be measured.
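A minimal sketch of how this operational definition could be turned into a measurable test is shown below; the function name, input format, and default thresholds are hypothetical and simply encode the criteria stated above (3 or more drinks within a 2-hour window).

```python
# Hypothetical sketch: operationalizing "binge drinking" as 3 or more
# drinks consumed within any 2-hour window. Thresholds are illustrative.
from datetime import datetime, timedelta

def is_binge_episode(drink_times, min_drinks=3, window_hours=2):
    """Return True if at least `min_drinks` drinks fall inside any `window_hours`-hour window."""
    times = sorted(drink_times)
    window = timedelta(hours=window_hours)
    for i, start in enumerate(times):
        # Count the drinks from this one forward that fall within the window.
        drinks_in_window = sum(1 for t in times[i:] if t - start <= window)
        if drinks_in_window >= min_drinks:
            return True
    return False

# Example: three drinks within 90 minutes meets the operational criteria.
first = datetime(2024, 1, 1, 20, 0)
print(is_binge_episode([first, first + timedelta(minutes=45), first + timedelta(minutes=90)]))  # True
```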
4. The process of conducting a critical literature review begins with reviewing books, articles, or professional literature related to the management dilemma or topic (Cooper & Schindler, pg. 94). According to the text, there are five steps in a critical literature review: (1) define the management question or dilemma, (2) consult encyclopedias, dictionaries, handbooks, and textbooks, (3) apply key terms, names, and events, (4) locate and review specific secondary sources, and (5) evaluate the value of each source and its content (Cooper & Schindler, pg. 95). The term critical is typically assigned to things that are mandatory or necessary to another process, and that is also the case here. The reviewing process not only helps to defend or disqualify questions, it also helps to ensure that every step of the process is considered. The purpose of the review is to provide those who are reading the assignment with sources that support the dilemma.
5. The three types of evidence a researcher seeks when testing a causal hypothesis are that it is (1) adequate for its purpose, (2) testable, and (3) better than its rivals (Cooper & Schindler, pg. 61).
Hypotheses help to guide the direction of the study and provide the framework for organizing results (Cooper & Schindler, pg. 60). Being adequate for its purpose means the hypothesis relates to the original issue and further explains the facts. Being testable means the hypothesis can be examined with acceptable techniques and is simple. Being better than its rivals means the hypothesis explains more facts and is better informed than its rivals (Cooper & Schindler, pg. 61).
6. The three forms of non-probability sampling are (1) convenience sampling, (2) purposive sampling, and (3) snowball sampling. In the text, convenience sampling is defined as selecting readily available individuals as participants. Purposive sampling is defined as researchers choosing participants arbitrarily for unique characteristics, experiences, attitudes, and/or perceptions; there are two types of purposive sampling: judgment and quota. Snowball sampling allows participants to refer the researcher to others whose characteristics, experiences, and/or attitudes are similar to or different from those of the original sample element (Cooper & Schindler, pgs. 359-360). Within our project, we began with convenience sampling, simply posting the survey on different social media sites for responses. As the week progressed, we noticed that we were getting more responses from females than males, so we began to use purposive sampling, choosing people based on characteristics. I noticed a lot of consistency based on our own demographic backgrounds, so I sent links to family members, friends, and associates who I know have different backgrounds than I do.
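A minimal sketch of how quota-style purposive sampling could be screened in practice is shown below; the respondent records, field names, and quota targets are all hypothetical and used only to illustrate accepting participants until each group's quota is filled.

```python
# Hypothetical sketch: quota (purposive) sampling from a convenience pool.
# A respondent is accepted only while their group's quota is still open.
respondents = [
    {"id": 1, "gender": "female"},
    {"id": 2, "gender": "female"},
    {"id": 3, "gender": "male"},
    {"id": 4, "gender": "female"},
    {"id": 5, "gender": "male"},
]
quotas = {"female": 2, "male": 2}   # assumed target counts per group
accepted = []

for person in respondents:
    group = person["gender"]
    if quotas.get(group, 0) > 0:
        accepted.append(person)
        quotas[group] -= 1

print([p["id"] for p in accepted])  # [1, 2, 3, 5]
```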
7. There are many strengths and limitations related to observation as a data collection method. The strengths are that it allows for securing information about people or activities that cannot be derived from experiments or surveys, avoiding participant filtering and forgetting, securing environmental context information, reducing obtrusiveness, and optimizing the naturalness of the research setting (Cooper & Schindler, pg. 187). The limitations associated with observation as a data collection method are the difficulty of waiting for long periods to capture the relevant phenomena, the costs associated with hiring an observer and equipment, the limited reliability of inferences drawn from surface indicators, and the restriction to present activities and the implications they carry about cognitive processes (Cooper & Schindler, pg. 187).
8. Although there are many threats associated with internal validity, the seven that are mentioned in the text are (1) history, (2) maturation, (3) testing, (4) instrumentation, (5) selection, (6) statistical regression, and (7) experimental mortality. History refers to events that occur during the course of the study (between pretest, manipulation, and posttest) and confuse the relationship being studied (Cooper & Schindler, pg. 201). Maturation relates to how the design may be weakened because change may also occur within the subject as a function of the passage of time, without reference to any particular event (Cooper & Schindler, pg. 201).
Testing is a threat because taking a first test can affect the scores of a second test (Cooper & Schindler, pg. 202). Instrumentation can pose a threat to internal validity due to changes between observations in the measuring instrument or the observer (Cooper & Schindler, pg. 202). Selection is an important threat to internal validity; it may weaken the design because of the differential selection of subjects for experimental and control groups (Cooper & Schindler, pg. 202). Statistical regression is the factor that operates, most specifically, when groups have been selected by their extreme scores. Finally, experimental mortality occurs when the composition of the study group changes during testing (Cooper & Schindler, pg. 202).
9. In the text, nonresponse error is defined as one that develops when an interviewer cannot locate the people with whom the study requires communication, or when those whom the survey is intended to focus on refuse to participate (Cooper & Schindler, pg. 346). Some ways to avoid nonresponse error are to test to make sure that there are no issues accessing or taking the survey, ensure that you target those whom the survey is intended to focus on, and try to offer an incentive for people to take the survey.
10. The four sources of error in measurement are (1) the respondent, (2) situational factors, (3) the measurer, and (4) the instrument (Cooper & Schindler, pg. 256). The respondent can be a major source of error because of many different personal characteristics, such as employee status, ethnic group membership, social class, and nearness to manufacturing facilities. An example could be a survey that is about something contrary to the respondent's current mood or circumstances. An example of a situational factor is anything that places strain on the interview or measurement; this could be a second individual present whose opinion may distort or influence responses (Cooper & Schindler, pg. 256). The measurer can produce error by paraphrasing a question or even through inadvertent gestures (Cooper & Schindler, pg. 256). An example of instrument error could be a defective tool or system that omits certain questions or responses.
11. Shared vocabulary requires that both the interviewer and the participant understand one another (Cooper & Schindler, pg. 330). Some effects of ignoring this could be confused participants, which can lead to misinterpretation of the true goal of the survey. Biased wording is defined as distortion of responses in one direction (Cooper & Schindler, pg. 332). In the text, word choice is a major source of biased wording with regard to the quality of resulting data; because of the way questions are worded, participants can be pushed toward biased responses (Cooper & Schindler, pg. 332). Adequate alternatives leave room to capture all potential answers as closely as possible; having too few options inhibits that.
12. The characteristics of a good measuring tool are validity, reliability, and practicality. Validity refers to the extent to which a test measures what it is intended to measure, reliability refers to the accuracy and precision of a measurement procedure, and practicality refers to factors like economy, convenience, and interpretability (Cooper & Schindler, pg. 257). We will ensure that our survey instrument displays these characteristics by doing research on the tool itself to ensure that there are no issues. We will also test it out to see what needs to be fixed and adjusted before the official survey is posted. Taking these steps helps to make sure that our survey meets these three requirements and offers us the information that we need to successfully complete the project.
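As one illustration of checking reliability during that testing step, the sketch below computes Cronbach's alpha, a common internal-consistency measure, on made-up pilot responses; the statistic is not named in the text and the data are purely hypothetical.

```python
# Hypothetical sketch: estimating internal-consistency reliability with
# Cronbach's alpha from a respondents-by-items matrix of pilot answers.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up pilot responses on a 3-item, 5-point scale.
pilot = [[4, 5, 4],
         [2, 3, 2],
         [5, 5, 4],
         [3, 3, 3]]
print(round(cronbach_alpha(pilot), 2))   # values near 1 suggest consistent items
```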
13. Forced-choice rating scales require participants to choose one of the alternatives offered, and this route typically excludes options such as “no opinion” or “neutral” (Cooper & Schindler, pg. 272). Unforced-choice rating scales provide the opportunity to express no opinion when participants are unable to choose among the alternatives offered (Cooper & Schindler, pg. 272). Some tactics to minimize participant tendencies to avoid extreme judgments, or to choose extreme positions on a scale, are to add more points on the scale or to add more descriptive adjectives to assist in answering the question.
14. The three types of questions that can be used when multiple responses to a single question are desired from the respondent are free-response, dichotomous, and multiple-choice questions (Cooper & Schindler, 2014). Free-response questions are also known as open-ended questions.
They allow the participant to explain their response in a space provided (Cooper & Schindler, pg. 308). An example of a free-response question typically starts with, “In your own words, explain what you think this survey was about.” Dichotomous-style questions encourage opposing responses, which may be subject to alternatives depending on the wording of the question (Cooper & Schindler, pg. 308). An example of a dichotomous-style question is “I plan to relocate in the next year: yes or no.” Multiple-choice-style questions offer multiple alternatives, but ultimately the participant is forced to choose just one answer (Cooper & Schindler, pg. 308). An example of a multiple-choice-style question would be “Pick your favorite restaurant in the Noda area,” offering five different options.
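A small sketch of how these three question formats might be represented in a survey definition is shown below; the wording and options are hypothetical and only echo the examples above.

```python
# Hypothetical sketch: the three question formats as simple records.
questions = [
    {"type": "free-response",
     "text": "In your own words, explain what you think this survey was about."},
    {"type": "dichotomous",
     "text": "I plan to relocate in the next year.",
     "options": ["Yes", "No"]},
    {"type": "multiple-choice",
     "text": "Pick your favorite restaurant in the Noda area.",
     "options": ["Option A", "Option B", "Option C", "Option D", "Option E"]},
]

for q in questions:
    # Free-response questions carry no fixed options; the other two force a single pick.
    print(f'{q["type"]}: {len(q.get("options", []))} fixed option(s)')
```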
15. The steps for drafting and refining a survey instrument are: (1) gather data, (2) compile the data, (3) eliminate excess data, and (4) review the data. Pilot testing is done to pinpoint weaknesses in the design and instrumentation and also to provide proxy data for selecting a probability sample (Cooper & Schindler, pg. 85). Pretesting gives the opportunity to try out the instrument, which permits refinement before the final test is administered (Cooper & Schindler, pg. 85). Within our pilot study we were able to follow the exact process mentioned above. We tested it out on a small number of people and asked them directly what they thought and what they would change. Their suggestions allowed us to make a few adjustments to the wording and repetitiveness of some questions. It also gave us an idea of how people would respond to the questions and how we could begin compiling the information obtained.
16. The rationale behind using sampling rather than a census for an everyday study is that sampling selects only some of the elements in a population, whereas a census is a count of all elements in a population (Cooper & Schindler, pg. 338). Non-probability sampling is an arbitrary and subjective procedure in which each population element does not have a known, nonzero chance of being included (Cooper & Schindler, pg. 343). Probability sampling is a controlled, randomized procedure that assures each population element is given a known, nonzero chance of selection; it also provides estimates of precision. Because of this, researchers typically choose probability sampling.
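A minimal sketch of simple random (probability) sampling is shown below; the population frame, sample size, and seed are made up, and the point is only that every element's chance of selection is known and nonzero in advance.

```python
# Minimal sketch: simple random sampling, where every element of the frame
# has the same known, nonzero chance of selection (n / N).
import random

population_frame = list(range(1, 501))   # hypothetical frame of 500 element IDs
sample_size = 50

random.seed(42)                          # fixed seed so the draw is reproducible
sample = random.sample(population_frame, sample_size)

print(len(sample))                              # 50 elements drawn without replacement
print(sample_size / len(population_frame))      # each element's selection probability: 0.1
```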
17. The four rules that guide the coding and categorization of a data set are: (1) appropriateness, (2) exhaustiveness, (3) mutual exclusivity, and (4) single dimension (Cooper & Schindler, pg. 383). Appropriateness is important to researchers because it concerns the best partitioning of the data for testing hypotheses, showing relationships, and using available comparison data. Exhaustiveness allows researchers to add an “other” option because all possible answers cannot be anticipated. Mutual exclusivity allows researchers to place potential answers in one and only one cell of a category set. Lastly, single dimension allows researchers to define categories by one concept or construct (Cooper & Schindler, pg. 383). Each of these factors is important to coding and allows researchers to address specific issues that come up when choosing measurement questions.
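A small sketch of how the exhaustiveness and mutual-exclusivity rules could be enforced when coding responses is shown below; the category names, keywords, and answers are hypothetical.

```python
# Hypothetical sketch: coding responses against an exhaustive, mutually
# exclusive category set. Falling back to "other" keeps the scheme exhaustive.
categories = {
    "employed full-time": {"full-time"},
    "employed part-time": {"part-time"},
    "not employed":       {"unemployed", "retired", "student"},
}

def code_response(answer):
    """Assign a response to exactly one category (mutual exclusivity)."""
    matches = [name for name, keywords in categories.items() if answer in keywords]
    if len(matches) > 1:
        raise ValueError(f"'{answer}' fits more than one category: {matches}")
    return matches[0] if matches else "other"   # "other" covers unanticipated answers

print(code_response("retired"))     # not employed
print(code_response("freelance"))   # other
```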
18. I believe that the statement “Research is worthless if you can’t communicate your results in a way that others can understand them” is true, but only to an extent. Research, and using different tools to work through a problem or scenario, is what makes the proposed results credible and valid. Although being able to display those results in a way that can be understood is important, it does not take away from the research itself. One step that can help make sure that more people understand is to identify the audience before conducting the research. Asking questions like “Is the audience knowledgeable on this subject?” can help gauge what tools to use. Throughout this program I’ve learned that visuals like graphs, charts, and tables should always be included.
19. The six-step procedure for testing statistical significance is: (1) state the null hypothesis, (2) choose the statistical test, (3) select the desired level of significance, (4) compute the calculated difference value, (5) obtain the critical test value, and (6) interpret the test (Cooper & Schindler, pg. 438). The null hypothesis is stated specifically for statistical testing purposes. Choosing the statistical test gives the researcher(s) an appropriate way to test the hypothesis. Choosing the level of significance reflects how much risk the researchers are willing to accept. Computing the calculated difference value is done after the data are collected. Obtaining the critical test value is done after the t, χ², or other measure has been computed, and it separates the region of rejection from the region of acceptance of the null hypothesis. Lastly, interpreting the test is where researchers determine whether they should reject the null hypothesis or fail to reject it (Cooper & Schindler, pg. 439).
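A minimal sketch of the six steps applied to a one-sample t-test is shown below; the scores, hypothesized mean, and significance level are made up purely for illustration.

```python
# Hypothetical sketch: walking the six-step significance-testing procedure
# with a one-sample t-test in SciPy. All numbers are illustrative.
from scipy import stats

scores = [68, 72, 75, 70, 69, 74, 71, 73]   # assumed sample observations

mu_0 = 70                                   # (1) state the null hypothesis: population mean = 70
# (2) choose the statistical test: a one-sample t-test suits interval data
alpha = 0.05                                # (3) select the desired level of significance
t_stat, p_value = stats.ttest_1samp(scores, mu_0)          # (4) compute the calculated value
t_crit = stats.t.ppf(1 - alpha / 2, df=len(scores) - 1)    # (5) obtain the critical value (two-tailed)
# (6) interpret the test: reject the null if the calculated value exceeds the critical value
decision = "reject the null" if abs(t_stat) > t_crit else "fail to reject the null"
print(round(t_stat, 3), round(t_crit, 3), decision)
```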