Thursday, October 31, 2019

Macro Economics Essay Example | Topics and Well Written Essays - 500 words - 7

The increase in oil prices has led to an increase in production costs across industry. Since Sweden is a big importer of oil, this increase will lead to a decrease in output and an increase in the rate of inflation. Since an increase in the inflation rate reduces the level of unemployment, this increase in oil prices will also lead to a reduction in unemployment. The increase in oil prices causes capital input to fall, since it has become expensive to operate machinery, and hence a decrease in the marginal product of labor. This will cause the short-run and long-run aggregate supply curves to shift leftwards. Since the increase does not affect the demand-side components, the AD curve will not move. Nominal wages and prices will rise from a low point to a high point. The rise in the price level will lead to a decrease in the real money supply. It will also lead to a rise in interest rates from a lower rate to a higher rate. This is a reduction in the investment component of aggregate demand. The consumption component also decreases despite government expenditure not changing. The shock caused by the supply side is similar to that caused by the demand side, the main difference being that it causes inflation and deflation. In cases where the country would want to treat a supply-side shock as it treats a demand-side shock, by trying to stimulate the economy using monetary or fiscal policies to shift the aggregate demand curve, it will not succeed. This is because nominal wages are only sticky downwards, and attempting that would cause inflation to increase further, making the economy deteriorate. In the short run, the aggregate supply curve will move to the left from short run AS1 to short run AS2. The intersection between short run AS2 and AD1 moves upwards to the left, to a higher point. At this point, output decreases and the price level increases. This forms short run

Tuesday, October 29, 2019

Auditing Essay Example | Topics and Well Written Essays - 750 words - 6

It has been noticed that frauds related to theft of inventory have a direct impact on the income statement of the company. Loss due to theft is directly proportional to the decrease in profit (Week 4, 2012). There are the following ways in which Mr. Franklin can reduce the probability of risk through theft. Access control consists of security measures taken to prohibit any kind of unauthorized entry into a restricted area (Audit Risk Assessment, Page 31); the risk of theft of any asset can be reduced to a minimal level if access is restricted, because there is then less probability of any kind of fraud or misrepresentation. For example, if there is only one person who manages all cash-related affairs and he is the only authorized person who has access to cash, then, in such a scenario, the probability of theft will be low. It is necessary to count the assets periodically and then compare the counts with our records (Audit Risk Assessment, Page 381). It is quite essential to safeguard our assets from theft. Such counts provide detail: if there is some difference between counted assets and recorded assets, then we need an explanation. For that reason, first of all we need to understand the concept of materiality, and we have to understand which category of goods is valuable to us (Week 5, 2012). Segregation of duties is a concept in which we use more than one person to complete a task; it means that we have to include different personnel to execute a single transaction (Audit Risk Assessment, Page 380), so the work of one individual is cross-checked by another individual. In such a setup, there is a possibility that the risk of theft will reduce. But we have to make sure that no one is performing incompatible duties. It is quite a good way to implement internal control. The other risk the hospitality industry was exposed to was a risk of fiddles or

Sunday, October 27, 2019

Factors Affecting Web Applications Maintenance

Chapter 1

1.1 Introduction

Software engineering [PRE01] is the process associated with industrial-quality software development: the methods used to analyze, design, and test computer software, the management techniques associated with the control and monitoring of software projects, and the tools used to support the process, methods, and techniques. In the Software Development Life Cycle, the focus is on activities like feasibility study, requirement analysis, design, coding, testing, and maintenance. A feasibility study addresses the technical, economic, and behavioral feasibility of the project. Requirement analysis [DAV93] emphasizes identifying the needs of the system and producing the Software Requirements Specification (SRS) document [JAL04], which describes all data, functional and behavioral requirements, constraints, and validation requirements for the software. Software design is to plan a solution to the problem specified by the SRS document, a step in moving from the problem domain to the solution domain. The output of this phase is the design document. Coding is to translate the design of the system into code in a programming language. Testing is the process of detecting defects and minimizing the risk associated with residual defects. The activities carried out after the delivery of the software comprise the maintenance phase.

1.2 Evolution of the Software Testing Discipline

The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. Thus we see that the software crisis of quality, reliability, high costs, etc. started way back, when most of today's software testers were not even born. The attitude towards software testing [BEI90] has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging.
When compilers were developed in the 1960s, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster, and more cost-effective software. There has also been a growing interest in software safety, protection, and security, and hence an increased acceptance of testing as a technical discipline and also a career choice. Now, to answer "What is testing?", we can go by the famous definition of Myers [MYE79], which says, "Testing is the process of executing a program with the intent of finding errors." According to Humphrey, software testing is "the execution of a program to find its faults." Testing is the process of proving that the software works correctly [PRA06]. Software testing is a crucial aspect of the software life cycle; in some form or other it is present at each phase of any software development or maintenance model. The importance of software testing and its impact on software cannot be overstated. Software testing is a fundamental component of software quality assurance and represents a review of specification, design, and coding. The greater visibility of software systems and the cost associated with software failure are motivating factors for planning thorough testing. It is not uncommon for a software organization to spend 40-50% of its effort on testing. During testing, the software engineer produces a series of test cases that are used to rip apart the software that has been produced. Testing is the one step in the software process that can be seen by the developer as destructive instead of constructive. Software engineers are typically constructive people, and testing requires them to overcome preconceived notions of correctness and deal with conflicts when errors are identified. A successful test is one that finds a defect.
This sounds simple enough, but there is much to consider when we want to do software testing. Besides finding faults, we may also be interested in testing performance, safety, fault-tolerance, or security. Testing often becomes a question of economics. For projects of a large size, more testing will usually reveal more bugs. The question then becomes when to stop testing, and what is an acceptable level of bugs: the question of "good enough" software. Testing is the process of verifying that a product meets all requirements. A test is never complete. When testing software, the goal should never be a product completely free from defects, because that is impossible. According to Peter Nielsen, the average is 16 faults per 1000 lines of code when the programmer has tested his code and believes it to be correct. A larger project may contain millions of lines of code, which makes it impossible to find all the faults present. Far too often, products are released on the market with poor quality. Errors are then uncovered by users, and at that stage the cost of removing them is high.

1.3 Objectives of Testing

Glen Myers [MYE79] states a number of rules that can serve well as testing objectives: Testing is a process of executing a program with the intent of finding an error. A good test is one that has a high probability of finding an as yet undiscovered error. A successful test is one that uncovers an as yet undiscovered error. The objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort. Secondary benefits include demonstrating that software functions appear to be working according to specification and that performance requirements appear to have been met. Data collected during testing provides a good indication of software reliability and some indication of software quality. Testing cannot show the absence of defects; it can only show that software defects are present.
1.4 Software Testing and Its Relation with the Software Life Cycle

Software testing should be thought of as an integral part of the software process and an activity that must be carried out throughout the life cycle. Each phase in the software life cycle has a clearly different end product, such as the software requirements specification (SRS) documentation, the program unit design, and the program unit code. Each end product can be checked for conformance with a previous phase and against the original requirements. Thus, errors can be detected at each phase of development. Validation and verification should occur throughout the software life cycle. Verification is the process of evaluating each phase's end product to ensure consistency with the end product of the previous phase. Validation is the process of testing software, or a specification, to ensure that it matches user requirements. Software testing is that part of validation and verification associated with evaluating and analysing program code. It is one of the two most expensive stages within the software life cycle, the other being maintenance. Software testing of a product begins after the development of the program units and continues until the product is obsolete. Testing and fixing can be done at any stage in the life cycle. However, the cost of finding and fixing errors increases dramatically as development progresses. Changing a requirements document during the first review is inexpensive. It costs more when requirements change after the code has been written: the code must be rewritten. Bug fixes are much cheaper when programmers find their own errors. Fixing an error before releasing a program is much cheaper than sending new disks, or even a technician, to each customer's site to fix it later. This is illustrated in Figure 1.1. The types of testing required during the several phases of the software life cycle are described below: Requirements: Requirements must be reviewed with the client; rapid prototyping can refine requirements and accommodate changing requirements.
Specification: The specification document must be checked for feasibility, traceability, completeness, and the absence of contradictions and ambiguities. Specification reviews (walkthroughs or inspections) are especially effective. Design: Design reviews are similar to specification reviews, but more technical. The design must be checked for logic faults, interface faults, lack of exception handling, and non-conformance to specifications. Implementation: Code modules are informally tested by the programmer while they are being implemented (desk checking). Thereafter, formal testing of modules is done methodically by a testing team. This formal testing can include non-execution-based methods (code inspections and walkthroughs) and execution-based methods (black-box testing, white-box testing). Integration: Integration testing is performed to ensure that the modules combine correctly to achieve a product that meets its specifications. Particular care must be given to the interfaces between modules. The appropriate order of combination must be determined: top-down, bottom-up, or a combination thereof. Product testing: The functionality of the product as a whole is checked against its specifications. Test cases are derived directly from the specification document. The product is also tested for robustness (error-handling capabilities and stress tests). All source code and documentation are checked for completeness and consistency. Acceptance testing: The software is delivered to the client, who tests it on the actual hardware, using actual data instead of test data. A product cannot be considered to satisfy its specifications until it has passed an acceptance test. Commercial off-the-shelf (or shrink-wrapped) software usually undergoes alpha and beta testing as a form of acceptance test. Maintenance: Modified versions of the original product must be tested to ensure that changes have been correctly implemented.
Also, the product must be tested against previous test cases to ensure that no inadvertent changes have been introduced. This latter consideration is termed regression testing. Software process management: The software process management plan must undergo scrutiny. It is especially important that cost and duration estimates be checked thoroughly. If left unchecked, errors can propagate through the development life cycle and amplify in number and cost. The cost of detecting and fixing an error is well documented and is known to rise the further the system has developed. An error found during the operation phase is the most costly to fix.

1.5 Principles of Software Testing

Software testing is an extremely creative and intellectually challenging task. The following are some important principles [DAV95] that should be kept in mind while carrying out software testing [PRE01] [SUM02]: Testing should be based on user requirements: this is in order to uncover any defects that might cause the program or system to fail to meet the client's requirements. Testing time and resources are limited: avoid redundant tests. It is impossible to test everything: exhaustive tests of all possible scenarios are impossible because of the many different variables affecting the system and the number of paths a program flow might take. Use effective resources to test: this means using the most suitable tools, procedures, and individuals to conduct the tests. Only those tools should be used by the test team that they are confident in and familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the developers. Test planning should be done early: test planning can begin independently of coding, as soon as the client requirements are set. Test for invalid and unexpected input conditions as well as valid conditions: the program should generate correct messages when an invalid test is encountered and should generate correct results when the test is valid.
The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found. Testing should begin at the module level: the focus of testing should be concentrated on the smallest programming units first and then expand to other parts of the system. Testing must be done by an independent party: testing should not be performed by the person or team that developed the software, since they tend to defend the correctness of the program. Assign the best personnel to the task: because testing requires high creativity and responsibility, only the best personnel should be assigned to design and implement test cases and to analyze test data and results. Testing should not be planned under the implicit assumption that no errors will be found. Testing is the process of executing software with the intention of finding errors. Keep the software static during test: the program must not be modified during the execution of the set of designed test cases. Document test cases and test results. Provide expected test results where possible: a necessary part of test documentation is the specification of expected results, even though it is sometimes impractical.

1.6 Software Testability and Its Characteristics

Testability is the ease with which software (or a program) can be tested [PRE01] [SUM02]. The following are some key characteristics of testability: the better it works, the more efficient the testing process is; what you see is what you test (WYSIWYT); the better it is controlled, the more we can automate or optimize the testing process; by controlling the scope of testing we can isolate problems and perform smarter retesting; the less there is to test, the more quickly we can test it; the fewer the changes, the fewer the disruptions to testing; the more information we have, the smarter we will test.

1.7 Stages in the Software Testing Process

Except for small programs, systems should not be tested as a single unit.
Large systems are built out of sub-systems, which are built out of modules, which are composed of procedures and functions. The testing process should therefore proceed in stages, with testing carried out incrementally in conjunction with system implementation. The most widely used testing process consists of five stages, illustrated in Table 1.1. Errors in program components may come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process. The iterative testing process is illustrated in Figure 1.2 and described below: Unit testing: Unit testing is code-oriented testing. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components. Module testing: A module is a collection of related components, such as an object class, an abstract data type, or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules. Sub-system (integration) testing: This phase involves testing collections of modules that have been integrated into sub-systems. It is design-oriented testing and is also known as integration testing. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces. System testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.
Acceptance testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system's requirements definition (user-oriented), because real data exercise the system in different ways from the test data. Acceptance testing may also reveal requirement problems where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable.

1.8 The V-model of Testing

To test an entire software system, tests on different levels are performed. The V-model [FEW99], shown in Figure 1.3, illustrates the hierarchy of tests usually performed in software development projects. The left part of the V represents the documentation of an application: the Requirement specification, the Functional specification, the System design, and the Unit design. Code is written to fulfill the requirements in these specifications, as illustrated at the bottom of the V. The right part of the V represents the test activities that are performed during development to ensure that the application corresponds to its requirements. Unit tests are used to test that all functions and methods in a module work as intended. When the modules have been tested, they are combined, and integration tests are used to test that they work together as a group. The unit and integration tests complement the system test. System testing is done on a complete system to validate that it corresponds to the system specification. A system test includes checking whether all functional and all non-functional requirements have been met. Unit, integration, and system tests are developer-focused, while acceptance tests are customer-focused. Acceptance testing checks that the system contains the functionality requested by the customer in the Requirement specification.
Customers are usually responsible for the acceptance tests, since they are the only persons qualified to make the judgment of approval. The purpose of the acceptance tests is that, after they are performed, the customer knows which parts of the Requirement specification the system satisfies.

1.9 The Testing Techniques

To perform these types of testing, there are three widely used testing techniques. The testing types described above are performed based on the following testing techniques. Black-box testing technique: Black-box testing (Figure 1.4) is concerned only with testing the specification. It cannot guarantee that the complete specification has been implemented. Thus black-box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. It is used for testing based solely on analysis of requirements (specification, user documentation). In black-box testing, test cases are designed using only the functional specification of the software, i.e., without any knowledge of the internal structure of the software. For this reason, black-box testing is also known as functional testing. Black-box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Functional tests typically exercise code with valid or nearly valid input for which the expected output is known; this includes concepts such as boundary values. Performance tests evaluate response time, memory usage, throughput, device utilization, and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error-handling capabilities. Reliability tests monitor system response to representative user input, counting failures over time to measure or certify reliability. Black-box testing refers to analyzing a running program by probing it with various inputs.
This kind of testing requires only a running program and does not make use of source code of any kind. In the security paradigm, malicious input can be supplied to the program in an effort to cause it to break. If the program breaks during a particular test, then a security problem may have been discovered. Black-box testing is possible even without access to binary code; that is, a program can be tested remotely over a network. All that is required is a program running somewhere that is accepting input. If the tester can supply input that the program consumes (and can observe the effect of the test), then black-box testing is possible. This is one reason that real attackers often resort to black-box techniques. Black-box testing is not an alternative to white-box techniques. It is a complementary approach that is likely to uncover a different class of errors than the white-box approaches. Black-box testing tries to find errors in the following categories: incorrect or missing functions; interface errors; errors in data structures or external database access; performance errors; and initialization and termination errors. By applying black-box approaches we produce a set of test cases that fulfill two requirements: test cases that reduce the number of test cases needed to achieve reasonable testing, and test cases that tell us something about the presence or absence of classes of errors. The methodologies used for black-box testing are discussed below.

1.9.1.1 Equivalence Partitioning

Equivalence partitioning is a black-box testing approach that splits the input domain of a program into classes of data from which test cases can be produced. An ideal test case single-handedly uncovers a class of errors that might otherwise require many test cases to be executed before the error is detected. Equivalence partitioning thus tries to outline a test case that identifies a class of errors. Test case design for equivalence partitioning is founded on an evaluation of equivalence classes for an input condition [BEI95].
An equivalence class depicts a set of valid or invalid states for the input condition. Equivalence classes can be defined based on the following [PRE01]: if an input condition specifies a range, one valid and two invalid equivalence classes are defined; if an input condition needs a specific value, one valid and two invalid equivalence classes are defined; if an input condition specifies a member of a set, one valid and one invalid equivalence class are defined; if an input condition is Boolean, one valid and one invalid class are defined.

1.9.1.2 Boundary Value Analysis

A great many errors happen at the boundaries of the input domain, and for this reason boundary value analysis (BVA) was developed. Boundary value analysis is a test case design approach that complements equivalence partitioning. BVA produces test cases from the output domain as well [MYE79]. The guidelines for BVA are close to those for equivalence partitioning [PRE01]: if an input condition specifies a range bounded by values a and b, test cases should be produced with values a and b, and just above and just below a and b, respectively; if an input condition specifies a number of values, test cases should be produced to exercise the minimum and maximum numbers; the guidelines above should also be applied to output conditions; and if internal program data structures have prescribed boundaries, test cases should be produced to exercise the data structure at its boundary. White-box testing technique: White-box testing (Figure 1.5) is testing against the implementation, as it is based on analysis of internal logic (design, code, etc.), and will discover faults of commission, indicating that part of the implementation is faulty. Designing white-box test cases requires thorough knowledge of the internal structure of the software, and therefore white-box testing is also called structural testing. White-box testing is performed to reveal problems with the internal structure of a program.
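The range-based guidelines for equivalence partitioning and boundary value analysis described above can be sketched in code. The following Python fragment is an illustrative sketch only; the function names and the example range 1..100 are assumptions for demonstration, not taken from the cited literature.

```python
# Illustrative sketch: equivalence partitioning and boundary value
# analysis for an input condition that specifies a range [a, b].
# Names and the 1..100 example are assumptions, not from the sources.

def bva_values(a, b):
    """Return BVA test inputs for a range bounded by a and b:
    the bounds themselves, plus values just below and just above each."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def partition(value, a, b):
    """Classify an input into one valid and two invalid equivalence
    classes, as the equivalence-partitioning guideline for ranges says."""
    if value < a:
        return "invalid-below"
    if value > b:
        return "invalid-above"
    return "valid"

# For a field that accepts 1..100, BVA yields six focused test inputs:
print(bva_values(1, 100))      # [0, 1, 2, 99, 100, 101]
print(partition(0, 1, 100))    # invalid-below
print(partition(50, 1, 100))   # valid
```

Note how the six BVA inputs deliberately straddle both boundaries, while the three equivalence classes mean a handful of representatives can stand in for the whole input domain.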
A common goal of white-box testing is to ensure that a test case exercises every path through a program. A fundamental strength that all white-box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white-box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases. White-box testing involves analyzing and understanding source code. Sometimes only binary code is available, but if you decompile a binary to get source code and then study the code, this can be considered a kind of white-box testing as well. White-box testing is typically very effective in finding programming errors and implementation errors in software. In some cases this activity amounts to pattern matching and can even be automated with a static analyzer. White-box testing is a test case design approach that employs the control architecture of the procedural design to produce test cases. Using white-box testing approaches, the software engineer can produce test cases that guarantee that all independent paths in a module have been exercised at least once, exercise all logical decisions, execute all loops at their boundaries and within their operational bounds, and exercise internal data structures to maintain their validity. There are several methodologies used for white-box testing. We discuss some important ones below.

1.9.2.1 Statement Coverage

The statement coverage methodology aims to design test cases so as to force the execution of every statement in a program at least once. The principal idea governing the statement coverage methodology is that unless a statement is executed, we have no way of determining whether an error exists in that statement.
In other words, the statement coverage criterion [RAP85] is based on the observation that an error existing in one part of a program cannot be discovered if the part of the program containing the error and generating the failure is not executed. However, executing a statement once, and that too for just one input value, and observing that it behaves properly for that value is no guarantee that it will behave correctly for all inputs.

1.9.2.2 Branch Coverage

In branch coverage testing, test cases are designed such that the different branch conditions are given true and false values in turn. It is obvious that branch testing guarantees statement coverage and is thus a stronger testing criterion than statement coverage testing [RAP85].

1.9.2.3 Path Coverage

The path coverage based testing strategy requires designing test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path is defined in terms of the control flow graph (CFG) of the program.

1.9.2.4 Loop Testing

Loops are very important constructs in nearly all algorithms. Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Simple loops, concatenated loops, nested loops, and unstructured loops are the four different types of loops [BEI90], as shown in Figure 1.6. Simple loop: The following set of tests should be applied to a simple loop, where n is the maximum number of allowable passes through the loop: skip the loop entirely; only one pass through the loop; two passes through the loop; m passes through the loop, where m < n; and n-1, n, and n+1 passes through the loop. Nested loop: Beizer's [BEI90] approach to nested loops is as follows: Start at the innermost loop and set all other loops to minimum values. Conduct the simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values.
Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops at typical values. Continue until all loops have been tested. Concatenated loops: These can be tested using the approach for simple loops if each loop is independent of the others. However, if the loop counter of loop 1 is used as the initial value for loop 2, then the approach for nested loops is to be used. Unstructured loops: This class of loops should be redesigned to reflect the use of structured programming constructs.

1.9.2.5 McCabe's Cyclomatic Complexity

The McCabe cyclomatic complexity [MCC76] of a program defines the number of independent paths in the program. Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as V(G) = E - N + 2, where E is the number of edges in the control flow graph and N is the number of nodes. The cyclomatic complexity value of a program defines the number of independent paths in the basis set of the program and provides a lower bound for the number of test cases that must be conducted to ensure that all statements have been executed at least once. Knowing the number of test cases required does not make it easy to derive them; it only gives an indication of the minimum number of test cases required. The following is the sequence of steps to be undertaken for deriving the path coverage based test cases of a program: draw the CFG; calculate the cyclomatic complexity V(G); calculate the basis set of linearly independent paths; and prepare a test case that will force execution of each path in the basis set.

1.9.2.6 Data Flow Based Testing

The data flow testing method chooses test paths of a program based on the locations of definitions and uses of variables in the program. Various data flow testing approaches have been examined [FRA88] [NTA88] [FRA93].
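The formula V(G) = E - N + 2 described above can be demonstrated on a small control flow graph. The following Python sketch is illustrative only; the example CFG (an if-else inside a while loop) and all identifiers are assumptions for demonstration.

```python
# Illustrative sketch: McCabe's cyclomatic complexity V(G) = E - N + 2,
# computed from a CFG given as an adjacency list (node -> successors).
# The example graph and names are assumptions for demonstration.

def cyclomatic_complexity(cfg):
    """Count nodes and edges of the CFG and apply V(G) = E - N + 2."""
    nodes = len(cfg)
    edges = sum(len(succs) for succs in cfg.values())
    return edges - nodes + 2

# CFG for: while cond: (if test: A else: B) -- 6 nodes, 7 edges.
cfg = {
    "entry": ["while"],
    "while": ["if", "exit"],   # loop test: iterate or leave
    "if":    ["A", "B"],       # branch inside the loop body
    "A":     ["while"],
    "B":     ["while"],
    "exit":  [],
}
print(cyclomatic_complexity(cfg))   # 7 - 6 + 2 = 3
```

The value 3 matches the basis set for this graph: leave the loop immediately, iterate once through A, and iterate once through B, so at least three test cases are needed for path coverage.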
Factors Affecting Web Applications Maintenance

Chapter 1

1.1 Introduction

Software engineering [PRE01] is the discipline concerned with industrial-quality software development: the methods used to analyze, design and test computer software, the management techniques used to control and monitor software projects, and the tools used to support processes, methods and techniques. In the software development life cycle, the focus is on activities such as feasibility study, requirement analysis, design, coding, testing and maintenance.
Feasibility study addresses the technical, economic and behavioral feasibility of the project. Requirement analysis [DAV93] emphasizes identifying the needs of the system and producing the Software Requirements Specification (SRS) document [JAL04], which describes all data, functional and behavioral requirements, constraints and validation requirements for the software. Software design plans a solution to the problem specified by the SRS document, a step in moving from the problem domain to the solution domain; the output of this phase is the design document. Coding translates the design of the system into code in a programming language. Testing is the process of detecting defects and minimizing the risk associated with residual defects. The activities carried out after the delivery of the software comprise the maintenance phase.

1.2 Evolution of the Software Testing Discipline

The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term "software engineering" was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. Thus the software crisis of quality, reliability and high costs started well before most of today's software testers were born. The attitude towards software testing [BEI90] has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered an activity separate from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and more cost-effective software, as well as a growing interest in software safety, protection and security, and hence an increased acceptance of testing as a technical discipline and a career choice.
Now, to answer "What is testing?", we can go by the famous definition of Myers [MYE79]: "Testing is the process of executing a program with the intent of finding errors." According to Humphrey, software testing is "the execution of a program to find its faults." Testing is the process of proving that the software works correctly [PRA06]. Software testing is a crucial aspect of the software life cycle; in some form or other it is present at each phase of any software development or maintenance model. The importance of software testing and its impact on software cannot be overstated. Software testing is a fundamental component of software quality assurance and represents a review of specification, design and coding. The greater visibility of software systems and the cost associated with software failure are motivating factors for planning thorough testing. It is not uncommon for a software organization to spend 40-50% of its effort on testing. During testing, the software engineer produces a series of test cases that are used to rip apart the software. Testing is the one step in the software process that can be seen by the developer as destructive rather than constructive. Software engineers are typically constructive people, and testing requires them to overcome preconceived notions of correctness and deal with conflicts when errors are identified. A successful test is one that finds a defect. This sounds simple enough, but there is much to consider when we want to do software testing. Besides finding faults, we may also be interested in testing performance, safety, fault tolerance or security. Testing often becomes a question of economics: for large projects, more testing will usually reveal more bugs. The question then becomes when to stop testing, and what level of remaining bugs is acceptable; this is the question of "good enough" software. Testing is the process of verifying that a product meets all requirements.
A test is never complete. When testing software, the goal should never be a product completely free from defects, because that is impossible. According to Peter Nielsen, the average is 16 faults per 1000 lines of code when the programmer has tested the code and believes it to be correct. In a larger project with millions of lines of code, it is impossible to find all the faults present. Far too often, products are released on the market with poor quality; errors are then uncovered by users, at a stage when the cost of removing them is high.

1.3 Objectives of Testing

Glen Myers [MYE79] states a number of rules that can serve well as testing objectives: Testing is a process of executing a program with the intent of finding an error. A good test is one that has a high probability of finding an as-yet-undiscovered error. A successful test is one that uncovers an as-yet-undiscovered error. The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort. Secondary benefits include demonstrating that software functions appear to be working according to specification and that performance requirements appear to have been met. Data collected during testing provides a good indication of software reliability and some indication of software quality. Testing cannot show the absence of defects; it can only show that software defects are present.

1.4 Software Testing and Its Relation with the Software Life Cycle

Software testing should be thought of as an integral part of the software process and an activity that must be carried out throughout the life cycle. Each phase in the software life cycle has a clearly different end product, such as the software requirements specification (SRS) document, the program unit design and the program unit code. Each end product can be checked for conformance with the previous phase and against the original requirements. Thus, errors can be detected at each phase of development.
Validation and verification should occur throughout the software life cycle. Verification is the process of evaluating each phase's end product to ensure consistency with the end product of the previous phase. Validation is the process of testing software, or a specification, to ensure that it matches user requirements. Software testing is the part of validation and verification associated with evaluating and analysing program code, and it is one of the two most expensive stages of the software life cycle, the other being maintenance. Software testing of a product begins after the development of the program units and continues until the product is obsolete. Testing and fixing can be done at any stage in the life cycle; however, the cost of finding and fixing errors increases dramatically as development progresses. Changing a requirements document during the first review is inexpensive. It costs more when requirements change after the code has been written, because the code must be rewritten. Bug fixes are much cheaper when programmers find their own errors, and fixing an error before releasing a program is much cheaper than sending new disks, or even a technician, to each customer's site to fix it later. This is illustrated in Figure 1.1. The types of testing required during the phases of the software life cycle are described below:

Requirements: Requirements must be reviewed with the client; rapid prototyping can refine requirements and accommodate changing requirements.

Specification: The specification document must be checked for feasibility, traceability, completeness, and absence of contradictions and ambiguities. Specification reviews (walkthroughs or inspections) are especially effective.

Design: Design reviews are similar to specification reviews, but more technical. The design must be checked for logic faults, interface faults, lack of exception handling and non-conformance to specifications.

Implementation: Code modules are informally tested by the programmer while they are being implemented (desk checking).
Thereafter, formal testing of modules is done methodically by a testing team. This formal testing can include non-execution-based methods (code inspections and walkthroughs) and execution-based methods (black-box testing, white-box testing).

Integration: Integration testing is performed to ensure that the modules combine correctly to achieve a product that meets its specifications. Particular care must be given to the interfaces between modules. The appropriate order of combination must be determined: top-down, bottom-up, or a combination thereof.

Product testing: The functionality of the product as a whole is checked against its specifications. Test cases are derived directly from the specification document. The product is also tested for robustness (error-handling capabilities and stress tests). All source code and documentation are checked for completeness and consistency.

Acceptance testing: The software is delivered to the client, who tests it on the actual hardware, using actual data instead of test data. A product cannot be considered to satisfy its specifications until it has passed an acceptance test. Commercial off-the-shelf (shrink-wrapped) software usually undergoes alpha and beta testing as a form of acceptance test.

Maintenance: Modified versions of the original product must be tested to ensure that changes have been correctly implemented. Also, the product must be tested against previous test cases to ensure that no inadvertent changes have been introduced; this latter consideration is termed regression testing.

Software process management: The software process management plan must undergo scrutiny. It is especially important that cost and duration estimates be checked thoroughly. If left unchecked, errors can propagate through the development life cycle and amplify in number and cost. The cost of detecting and fixing an error is well documented and is known to grow as the system develops; an error found during the operation phase is the most costly to fix.
1.5 Principles of Software Testing

Software testing is an extremely creative and intellectually challenging task. The following are some important principles [DAV95] that should be kept in mind while carrying out software testing [PRE01] [SUM02]:

Testing should be based on user requirements, in order to uncover any defects that might cause the program or system to fail to meet the client's requirements.

Testing time and resources are limited: avoid redundant tests.

It is impossible to test everything: exhaustive tests of all possible scenarios are impossible, because of the many variables affecting the system and the number of paths a program's flow might take.

Use effective resources to test: use the most suitable tools, procedures and individuals to conduct the tests. The test team should use only tools that they are confident and familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the developers.

Test planning should be done early, because it can begin independently of coding, as soon as the client's requirements are set.

Test for invalid and unexpected input conditions as well as valid conditions: the program should generate correct messages when an invalid test is encountered and should generate correct results when the test is valid.

The probability of more errors existing in a module or group of modules is directly proportional to the number of errors already found.

Testing should begin at the module level: the focus of testing should be concentrated on the smallest programming units first and then expand to other parts of the system.

Testing must be done by an independent party: testing should not be performed by the person or team that developed the software, since they tend to defend the correctness of the program.
Assign the best personnel to the task: because testing requires high creativity and responsibility, only the best personnel should be assigned to design, implement and analyze test cases, test data and test results.

Testing should not be planned under the implicit assumption that no errors will be found.

Testing is the process of executing software with the intention of finding errors.

Keep the software static during test: the program must not be modified during the execution of the set of designed test cases.

Document test cases and test results.

Provide expected test results where possible: a necessary part of test documentation is the specification of expected results, even when producing them is impractical.

1.6 Software Testability and Its Characteristics

Testability is the ease with which software (or a program) can be tested [PRE01] [SUM02]. The following are some key characteristics of testability: The better it works, the more efficient the testing process is. What you see is what you test (WYSIWYT). The better the software is controlled, the more we can automate or optimize the testing process. By controlling the scope of testing, we can isolate problems and perform smarter retesting. The less there is to test, the more quickly we can test it. The fewer the changes, the fewer the disruptions to testing. The more information we have, the smarter we will test.

1.7 Stages in the Software Testing Process

Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules, which are composed of procedures and functions. The testing process should therefore proceed in stages, with testing carried out incrementally in conjunction with system implementation. The most widely used testing process consists of the five stages illustrated in Table 1.1. Errors in program components may come to light at a later stage of the testing process.
The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process. The iterative testing process is illustrated in Figure 1.2 and described below:

Unit testing: Unit testing is code-oriented testing. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.

Module testing: A module is a collection of dependent components, such as an object class, an abstract data type, or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules.

Sub-system (integration) testing: This phase involves testing collections of modules that have been integrated into sub-systems. It is design-oriented testing and is also known as integration testing. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches; the sub-system test process should therefore concentrate on detecting interface errors by rigorously exercising these interfaces.

System testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.

Acceptance testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system's requirements definition (user-oriented), because real data exercises the system in different ways from the test data.
Acceptance testing may also reveal requirements problems, where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable.

1.8 The V-model of Testing

To test an entire software system, tests are performed at different levels. The V-model [FEW99], shown in Figure 1.3, illustrates the hierarchy of tests usually performed in software development projects. The left side of the V represents the documentation of an application: the requirement specification, the functional specification, the system design and the unit design. Code is written to fulfil the requirements in these specifications, as illustrated at the bottom of the V. The right side of the V represents the test activities performed during development to ensure that the application corresponds to its requirements. Unit tests are used to test that all functions and methods in a module are working as intended. When the modules have been tested, they are combined, and integration tests are used to test that they work together as a group. The unit and integration tests are complemented by the system test. System testing is done on a complete system to validate that it corresponds to the system specification; a system test includes checking whether all functional and non-functional requirements have been met. Unit, integration and system tests are developer-focused, while acceptance tests are customer-focused. Acceptance testing checks that the system contains the functionality requested by the customer in the requirement specification. Customers are usually responsible for the acceptance tests, since they are the only persons qualified to judge approval. The purpose of the acceptance tests is that, after they are performed, the customer knows which parts of the requirement specification the system satisfies.

1.9 The Testing Techniques

To perform these types of testing, there are three widely used testing techniques.
The testing types described above are performed based on the following testing techniques:

1.9.1 Black-Box Testing Technique

Black-box testing (Figure 1.4) is concerned only with testing the specification; it cannot guarantee that the complete specification has been implemented. Thus black-box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. It is used for testing based solely on analysis of requirements (specification, user documentation). In black-box testing, test cases are designed using only the functional specification of the software, i.e. without any knowledge of its internal structure; for this reason, black-box testing is also known as functional testing. Black-box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Functional tests typically exercise code with valid or nearly valid input for which the expected output is known; this includes concepts such as boundary values. Performance tests evaluate response time, memory usage, throughput, device utilization and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error handling. Reliability tests monitor the system's response to representative user input, counting failures over time to measure or certify reliability. Black-box testing refers to analyzing a running program by probing it with various inputs. This kind of testing requires only a running program and does not make use of source code in any way. In the security paradigm, malicious input can be supplied to the program in an effort to cause it to break; if the program breaks during a particular test, then a security problem may have been discovered. Black-box testing is possible even without access to binary code; that is, a program can be tested remotely over a network.
All that is required is a program running somewhere that is accepting input. If the tester can supply input that the program consumes (and can observe the effect of the test), then black-box testing is possible. This is one reason that real attackers often resort to black-box techniques. Black-box testing is not an alternative to white-box techniques; it is a complementary approach that is likely to uncover a different class of errors than the white-box approaches. Black-box testing tries to find errors in the following categories: incorrect or missing functions; interface errors; errors in data structures or external database access; performance errors; and initialization and termination errors. By applying black-box approaches we produce a set of test cases that fulfil two requirements: test cases that reduce the number of tests needed to achieve reasonable testing, and test cases that tell us something about the presence or absence of classes of errors. The methodologies used for black-box testing are discussed below.

1.9.1.1 Equivalence Partitioning

Equivalence partitioning is a black-box testing approach that splits the input domain of a program into classes of data from which test cases can be produced. An ideal test case uncovers a class of errors that might otherwise require many test cases to be executed before the general error is observed; equivalence partitioning thus tries to outline a test case that identifies classes of errors. Test case design for equivalence partitioning is founded on an evaluation of equivalence classes for an input condition [BEI95]. An equivalence class depicts a set of valid or invalid states for the input condition. Equivalence classes can be defined based on the following guidelines [PRE01]: If an input condition specifies a range, one valid and two invalid equivalence classes are defined. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
If an input condition is Boolean, one valid and one invalid class are defined.

1.9.1.2 Boundary Value Analysis

A great many errors occur at the boundaries of the input domain, and for this reason boundary value analysis (BVA) was developed. Boundary value analysis is a test case design approach that complements equivalence partitioning; BVA also produces test cases from the output domain [MYE79]. The guidelines for BVA are close to those for equivalence partitioning [PRE01]: If an input condition specifies a range bounded by values a and b, test cases should be produced with values a and b, and just above and just below a and b, respectively. If an input condition specifies a number of values, test cases should be produced that exercise the minimum and maximum numbers. Apply the guidelines above to output conditions as well. If internal program data structures have prescribed boundaries, produce test cases that exercise the data structure at its boundary.

1.9.2 White-Box Testing Technique

White-box testing (Figure 1.5) is testing against the implementation, as it is based on analysis of internal logic (design, code, etc.), and will discover faults of commission, indicating that part of the implementation is faulty. Designing white-box test cases requires thorough knowledge of the internal structure of the software, and therefore white-box testing is also called structural testing. White-box testing is performed to reveal problems with the internal structure of a program. A common goal of white-box testing is to ensure that a test suite exercises every path through a program. A fundamental strength that all white-box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white-box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases.
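A coverage metric of this kind can be sketched in a few lines. The following is a hypothetical illustration using Python's built-in tracing hook; the function `absolute` and the helper `traced_lines` are invented for this example, and real projects would normally use a dedicated coverage tool:

```python
import sys

def absolute(x):
    if x < 0:
        return -x
    return x

def traced_lines(func, *args):
    """Run func and record which of its line numbers were executed."""
    hits = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            hits.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

# A single input exercises only some statements of absolute();
# the union over several inputs approaches full statement coverage.
covered = traced_lines(absolute, 5) | traced_lines(absolute, -5)
```

Dividing the number of covered lines by the number of executable lines in the function would give the coverage fraction referred to above.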
White-box testing involves analyzing and understanding source code. Sometimes only binary code is available, but if you decompile a binary to get source code and then study the code, this can be considered a kind of white-box testing as well. White-box testing is typically very effective in finding programming errors and implementation errors in software. In some cases this activity amounts to pattern matching and can even be automated with a static analyzer. White-box testing is a test case design approach that employs the control architecture of the procedural design to produce test cases. Using white-box testing approaches, the software engineer can produce test cases that: guarantee that all independent paths in a module have been exercised at least once; exercise all logical decisions; execute all loops at their boundaries and within their operational bounds; and exercise internal data structures to maintain their validity. There are several methodologies used for white-box testing; we discuss some important ones below.

1.9.2.1 Statement Coverage

The statement coverage methodology aims to design test cases so as to force the execution of every statement in a program at least once. The principal idea governing the statement coverage methodology is that unless a statement is executed, we have no way of determining whether an error exists in that statement. In other words, the statement coverage criterion [RAP85] is based on the observation that an error existing in one part of a program cannot be discovered if the part of the program containing the error and generating the failure is not executed. However, executing a statement once, for just one input value, and observing that it behaves properly for that input value is no guarantee that it will behave correctly for all inputs.

1.9.2.2 Branch Coverage

In branch coverage testing, test cases are designed such that the different branch conditions are given true and false values in turn.
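As a small sketch of this idea (the function and input values are invented for illustration), a branch-coverage test set for a function with two conditions makes each condition evaluate to both true and false:

```python
def classify(score):
    """Toy function with two branch conditions."""
    if score < 0:           # branch 1
        return "invalid"
    if score >= 50:         # branch 2
        return "pass"
    return "fail"

# Branch-coverage test set: each condition is made true and false in turn.
branch_tests = {
    -1: "invalid",  # branch 1 true
    75: "pass",     # branch 1 false, branch 2 true
    10: "fail",     # branch 1 false, branch 2 false
}

for score, expected in branch_tests.items():
    assert classify(score) == expected
```

Note that any single input would execute some statements but leave at least one condition untested on one of its outcomes; the three inputs together cover both outcomes of both branches.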
It is obvious that branch testing guarantees statement coverage and is thus a stronger testing criterion than statement coverage testing [RAP85].

1.9.2.3 Path Coverage

The path coverage based testing strategy requires designing test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path is defined in terms of the control flow graph (CFG) of the program.

1.9.2.4 Loop Testing

Loops are very important constructs in virtually all algorithms. Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Simple loops, concatenated loops, nested loops and unstructured loops are the four different types of loops [BEI90], as shown in Figure 1.6.

Simple loops: The following set of tests should be applied to a simple loop, where n is the maximum number of allowable passes through the loop: skip the loop entirely; only one pass through the loop; two passes through the loop; m passes through the loop, where m < n; and n-1, n and n+1 passes through the loop.

Nested loops: Beizer's [BEI90] approach to nested loops: Start at the innermost loop and set all other loops to their minimum values. Conduct the simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Work outward, conducting tests for the next loop, while keeping all outer loops at minimum values and other nested loops at typical values. Continue until all loops have been tested.

Concatenated loops: These can be tested using the approach for simple loops if each loop is independent of the others. However, if the loop counter of loop 1 is used as the initial value for loop 2, then the nested loop approach is to be used.

Unstructured loops: This class of loops should be redesigned to reflect the use of structured programming constructs.

1.9.2.5 McCabe's Cyclomatic Complexity

McCabe's cyclomatic complexity [MCC76] of a program defines the number of independent paths in the program.
Given a control flow graph G of a program, McCabe's cyclomatic complexity V(G) can be computed as:

V(G) = E - N + 2

where E is the number of edges and N is the number of nodes in the control flow graph. The cyclomatic complexity value of a program defines the number of independent paths in the basis set of the program and provides a lower bound for the number of test cases that must be conducted to ensure that all statements have been executed at least once. Knowing the number of test cases required does not make it easy to derive the test cases; it only gives an indication of the minimum number required. The following sequence of steps needs to be undertaken for deriving the path coverage based test cases of a program: Draw the CFG. Calculate the cyclomatic complexity V(G). Determine the basis set of linearly independent paths. Prepare a test case that will force execution of each path in the basis set.

1.9.2.6 Data Flow Based Testing

The data flow testing method chooses test paths of a program based on the locations of definitions and uses of variables in the program. Various data flow testing approaches have been examined [FRA88] [NTA88] [FRA93]. For data flow testing, it is assumed that each statement in the program is allocated a unique statement number and that no function alters its parameters or global variables. For a statement with S as its statement number:

DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is left empty and its USE set is founded on the condition of statement S. The definition of a variable X at statement S is live at statement S' if there exists a path from statement S to S' which does not contain any redefinition of X.
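To make these definitions concrete, consider a small numbered fragment (invented for illustration) with its DEF and USE sets written out as Python dictionaries keyed by statement number:

```python
# A hypothetical statement-numbered fragment:
#   S1: x = input()
#   S2: y = x + 1
#   S3: if y > 0:
#   S4:     x = y
#   S5: print(x)

DEF = {1: {"x"}, 2: {"y"}, 3: set(), 4: {"x"}, 5: set()}
USE = {1: set(), 2: {"x"}, 3: {"y"}, 4: {"y"}, 5: {"x"}}

# The definition of x at S1 is live at S2, and also at S5 along the
# path S1 -> S2 -> S3 -> S5, since neither path redefines x.  It is
# not live at S5 along S1 -> S2 -> S3 -> S4 -> S5, because S4
# redefines x.
```

Note that the if statement S3 has an empty DEF set and a USE set drawn from its condition, exactly as described above.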
A definition-use chain (or DU chain) of a variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X in statement S is live at statement S'. One basic data flow testing strategy is that each DU chain be covered at least once. Data flow testing strategies are helpful for choosing test paths of a program that includes nested if and loop statements.

1.9.3 Grey-Box Testing Technique

Grey-box testing [BIN99] designs test cases using both responsibility-based (black-box) and implementation-based (white-box) approaches. To completely test a web application, one needs to combine the two approaches, white-box and black-box testing; grey-box testing is used for testing web-based applications. The grey-box testing approach takes into account all components ma

Friday, October 25, 2019

Hamlet - The Imbalance of the Idealistic Mind and Human Nature Essay

Hamlet - The Imbalance of the Idealistic Mind and Human Nature

It is often heard: Nobody is Perfect. This phrase is often used as a rationalization of foolish human mistakes that could have been prevented. However, this statement has a much more profound significance. It contains an important lesson that guides, or rather should guide, people through life. By admitting that nobody is perfect, the individual demonstrates a deeper understanding of human nature and the inner self. This knowledge is essential to the individual's creation of healthy relationships with one's surroundings. For as Robert A. Johnson asserts in his book He, "perfection or a good score is not required; but consciousness is" (76). In William Shakespeare's play Hamlet, the main character experiences enormous inner turmoil, for he fails to acknowledge the human tendency for imperfection or, more strongly emphasized, the human proneness to err. With his idealistic perception of the world crushed by his father's death and the incestuous remarriage of his glorified mother, Hamlet unconsciously throws himself into a reality in which he develops a deep resentment for humanity and, more specifically, for his mother, Queen Gertrude. His frustrating disorientation and misunderstanding of his situation is not brought upon by repressed sexual desires gaining control of Hamlet's mind, as Sigmund Freud would have it (119); it is, perhaps, the necessity forcing him to abandon his security that causes Hamlet to become paralyzed in his "meditation of inward thoughts" (Coleridge 95), thus precluding his ability to act upon his deepest desire to avenge the wrongs.

When King Hamlet, Prince Hamlet's father, was still alive, the prince... ... now; if it be not now,/ yet it [will] come - the readiness is all.
Since no man, of/ aught he leaves, knows what is't to leave betime, let be" (5.2.202-206), Hamlet demonstrates his newly found understanding as well as contentment with himself, for he has come to terms with the non-idealistic world and reached "tao, the middle way" (Johnson 38). Through accepting his new identity as it should be in the context of the whole universe, the prince stops attempting to assign everything its place and instead allows the natural order to occur. Accordingly, he is able to reason and act in harmony with his mind, for he has reached the Grail Castle, the "inner reality, a vision, poetry, a mystical experience, and it cannot be found in any outer place" (Johnson 56).

Works Cited:

Shakespeare, William. Hamlet. Ed. David Bevington. New York: Longman, 1997.

Thursday, October 24, 2019

Learning Styles and the Most Preferred Teaching Methodology Among Sophomore Nursing Students

Learning Styles and the Most Preferred Teaching Methodology among Sophomore Nursing Students

An Undergraduate Thesis Presented to the Faculty of the Institute of Nursing, Far Eastern University, in Partial Fulfillment of the Requirements for the Degree of Bachelor of Science in Nursing.

Submitted by: Fernandez, Marjorie S.; Flaga, Arlene M.; Flores, Con Adrianne E.; Flores, Jethro S.; Flores, Kim Sheri L.; Flores, Nonnette Adrian L.; Floro, Giselle Ann DR.; Foronda, Djenina R.; Francia, Lovie Jay A.; Frany, Lizairie Q.; Fulgentes, Ezra M. BSN 114 / Group 56

Submitted to: Mr. Renante Dante Tan RN, MAN. September 2010

APPROVAL SHEET

The thesis proposal entitled "Learning Styles and the Most Preferred Teaching Methodologies among Sophomore Nursing Students," prepared and submitted by Fernandez, Marjorie S.; Flaga, Arlene M.; Flores, Con Adrianne E.; Flores, Jethro S.; Flores, Kim Sheri L.; Flores, Nonnette Adrian L.; Floro, Giselle Ann DR.; Foronda, Djenina R.; Francia, Lovie Jay A.; Frany, Lizairie Q.; Fulgentes, Ezra M., in partial fulfillment of the requirements for the degree of Bachelor of Science in Nursing, has been examined and recommended for acceptance and approval for oral examination.

Renante Dante G. Tan, Research Adviser

Approved by the committee in Oral Examination with a grade of ____: Ma. Belinda Buenafe RN, Ph.D.; Josefina Florendo RN, MAN, Associate Dean, Institute of Nursing; Esther Salvador RN, MAN.

Accepted and approved in partial fulfillment of the requirements for the degree of Bachelor of Science in Nursing: Glenda S. Arquiza RN, Ph.D., Dean, Institute of Nursing.

ACKNOWLEDGEMENT

The researchers of group 56 of Far Eastern University, BSN 114 Batch 2011, would like to extend our deep appreciation and sincerest gratitude to the outstanding people who made the study possible. First and foremost we thank our ever-loving God, who was our strength during our weakness and our guide when we were out of sight.
To our cooperative respondents and to their respective clinical instructors, we are very grateful for their acceptance, which made our research possible. To our parents, we thank them for their immeasurable love, deep understanding and never-ending support despite our busy schedule at school. We would also like to thank our fellow group mates and friends for understanding and exerting effort so that, despite the pressure and conflicts, we remained intact and united in fulfilling this study. We would also like to thank Mr. Jay-el Viteno, who, despite his busy schedule, was able to make time to guide us in making and understanding our research statistics. To our research adviser, Mr. Renante Dante G. Tan RN, MAN, for sharing with us his precious time and his guidance in helping us make this research work possible; we would also like to thank him for all the encouragement and for his immeasurable faith and support in this work. To our respective panelists, Josefina Florendo RN, MAN, Esther Salvador RN, MAN and Dr. Ma. Belinda Buenafe of the Institute of Nursing, for letting us spread our wings and believing in us more than we do. With this, we would like to dedicate our finished manuscript to all the people who became part of our journey.

ABSTRACT

Objective: To determine the learning styles of the sophomore nursing students in Far Eastern University and their most preferred teaching methodology in terms of didactics and in terms of skills.

Methods: This study was conducted at Far Eastern University during the period from November 2009 to September 2010. The total population of the sophomores was 630; using Slovin's formula, we arrived at a sample of 245 students. The instrument used by the researchers has two parts.
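The sample size reported above follows from Slovin's formula, n = N / (1 + N·e²). As a hedged sketch (the 5% margin of error is an assumption for illustration; the text does not state the value of e used):

```python
import math

def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# 630 sophomore students, assumed margin of error e = 0.05
print(slovin(630, 0.05))  # -> 245, matching the reported sample size
```

With N = 630 and e = 0.05 the formula gives 630 / 2.575 ≈ 244.66, which rounds up to the 245 respondents used in the study.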
The first part was Kolb's Learning Style Inventory, which was used to determine the learning style of the respondents, while the second part, a self-made instrument validated by three experts, was used to determine their most preferred teaching methodology.

Results: The majority of the respondents were divergers, with a frequency of 81 out of the 245 respondents; 58 were accommodators, 57 were assimilators and 49 were convergers. Based on the findings, there was a significant difference among the learning styles of the sophomore nursing students. The researchers also found that all four learning styles had a common preferred teaching methodology, which was demonstration. In terms of didactics, accommodators and convergers preferred pure lecture/discussion without PowerPoint, with percentages of 27.6 and 32.7, respectively. Divergers and assimilators preferred pure lecture/discussion with the use of PowerPoint, with percentages of 23.7 and 17.2, respectively.
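As a hedged check on the goodness-of-fit result, the chi-square statistic can be recomputed from the reported counts. This assumes the test compared the four observed frequencies against a uniform expectation of 245/4 per style, which the text does not state explicitly:

```python
# Chi-square goodness-of-fit for the reported learning-style frequencies.
# The 0.05 critical value for df = 3 is 7.815 (standard chi-square table).
observed = {"divergers": 81, "accommodators": 58,
            "assimilators": 57, "convergers": 49}

n = sum(observed.values())        # 245 respondents
expected = n / len(observed)      # 61.25 per style if styles were uniform

chi2 = sum((o - expected) ** 2 / expected for o in observed.values())
print(round(chi2, 2))             # 9.29 > 7.815, consistent with a significant difference
```

Under this assumption the statistic (about 9.29 with 3 degrees of freedom) exceeds the 0.05 critical value, which is consistent with the significant difference the abstract reports.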
Conclusion: The results showed that there was a significant relationship between the learning styles of the sophomore nursing students and their most preferred teaching methodology.

TABLE OF CONTENTS

Title; Approval Sheet; Acknowledgement; Abstract; Table of Contents; List of Tables; List of Figures
Chapter I - Introduction: Background of the Study; Statement of the Problem; Significance of the Study; Scope and Limitation
Chapter II - Theoretical Framework: Review of Related Literature; Research Paradigm; Research Hypothesis; The Main Variable of the Study; Definition of Terms
Chapter III - Research Methodology: Research Design; Population and Sample; Research Locale; Research Instruments; Validation of the Instruments; Data Collection Procedure; Statistical Treatment of Data
Chapter IV - Results and Discussion
Chapter V - Summary of Findings, Conclusions and Recommendations
Bibliography: Books; Journals; Website
Appendices: A. Letter to the Dean; B. Kolb's Learning Style Inventory; C. Learning Style Grid; D. Population of the Sophomore Nursing Students Included in the Study per Section; E. Learning Styles of Sophomore Nursing Students; F. Learning Styles and the Preferred Teaching Methodology of Sophomore Nursing Students per Section in Terms of Didactics; G. Learning Styles and the Preferred Teaching Methodology of Sophomore Nursing Students per Section in Terms of Skills; H. Curriculum Vitae

List of Tables
1. Frequency Distribution & Percentage of the 6 Sections Included in the Study
2. Frequency & Percentage Distribution of the 245 Respondents as to Their Different Learning Styles
3. Frequency and Percentage Distribution of Preferred Teaching Methodology of Sophomore Nursing Students as per Learning Style in Terms of Didactics
4. Frequency and Percentage Distribution of Preferred Teaching Methodologies of Sophomore Nursing Students as per Learning Style in Terms of Skills
5. Chi-Square Goodness of Fit for the Difference Among the Different Learning Styles
6. Chi-Square Test of Independence for the Significant Relationship of Learning Styles of Sophomore Nursing Students and Their Most Preferred Teaching Methodologies in Terms of Didactics
7. Chi-Square Test of Independence for the Significant Relationship of Learning Styles of Sophomore Nursing Students and Their Most Preferred Teaching Methodology in Terms of Skills

List of Figures
1. Research Paradigm
2. Learning Style Grid

CHAPTER I
INTRODUCTION

Background of the Study

Students have different approaches to learning, and this is what makes them unique. Thus, understanding how they learn and helping them to learn is vital in any educational program. This can be especially important because students may have different ways of learning. Some students prefer to learn through group work while some prefer learning alone; some learn while listening to music while other students learn effectively by studying silently. Furthermore, some students learn by engaging in activities, yet some learn by observation only. These different learning techniques affect the students, especially their coping mechanisms in terms of learning. In high school, teachers tend to spoon-feed their students, while in college professors are different, for the reason that they seldom discuss the entire topic or lesson with the students. Instead, they only discuss the important ideas and encourage the students to read more and study harder. On the other hand, there are many teaching methodologies that can be used to effectively enhance a student's learning capabilities. Some focus on learning skills while others are more on enhancing knowledge.
In Far Eastern University, particularly in the Institute of Nursing, a particular subject is usually divided into different concepts. Each concept is taught by a specific professor who has mastery over that concept. These professors use different teaching methodologies in educating the students. Some stick to only one methodology, while others use several. Some of the most commonly used teaching methodologies are lecture with PowerPoint and role playing in didactics, and demonstration in skills. In the study conducted by Hauer, Straub and Wolf (2005), nursing students were identified as having a learning style preference between that of a diverger and an assimilator. According to the study conducted by Elliot (2003) about the preferred teaching methodologies of nursing students, most students preferred case studies and group discussion/activities. The question now arises: with the advent of technological advancements such as CCTV (closed-circuit television) and virtual laboratories, can case study and group discussion still be the most preferred teaching methodologies among nursing students?

Statement of the Problem

This study aimed to determine the relationship between the learning styles of sophomore nursing students of Far Eastern University and their most preferred teaching methodology. Specifically, it sought to answer the following questions:
1. What is the learning style of sophomore nursing students? a. Convergers b. Divergers c. Assimilators d. Accommodators
2. What is the most preferred teaching methodology of sophomore nursing students as to their learning styles in terms of: a. Didactics b. Skills
3. Is there a significant difference among the different learning styles of sophomore nursing students?
4. Is there a significant relationship in the learning styles of sophomore nursing students when grouped according to their most preferred teaching methodology in terms of: I. Didactics II.
Skills

Significance of the Study

The different learning styles of every student at the present time have a great impact on their academic performance. Everyone uses a different learning style to improve their own knowledge: some may be good at studying while listening to music, some students like to study alone within the four corners of their room, and some students learn easily if they are within a group study session. The researchers chose this topic because they want to challenge themselves to become more aware of how the students learn and grow in their own way and of how the students learn and acquire knowledge.

Nursing Practice. The study would be helpful in guiding students toward identifying their learning style as early as their second year in the Institute of Nursing. Therefore, having a learning style that matches their most preferred teaching methodology could help them learn more effectively, thus enhancing their knowledge. Having more knowledge could enable the students to be more proficient in nursing practice.

Nursing Administration. The study provided awareness about the learning styles and the preferred teaching methodologies of sophomore nursing students, thus giving insights about the needs of the students as they learn. This study might also provide ideas on how to improve the education system for the benefit of the students.

Nursing Education. The result of this study should give nursing educators ideas on what specific teaching methodology to use based on the students' learning style. Furthermore, the nursing educator would have an insight as to what teaching methodology is best suited for the student, providing a more student-centered teaching method that would aid the student to learn more effectively.
Nursing Research. The study of the different learning styles of students might contribute to nursing research by developing trustworthy evidence about issues of importance to the nursing profession, including nursing practice, education, administration and informatics. The study is more significant at the present time because, through this research, the researchers might show that learning styles and strategies have advantages and disadvantages in studying. Research in nursing is important because, with broad support for evidence-based nursing practice, research has assumed heightened importance for nurses. Research findings from rigorous studies provide especially strong evidence for informing nurses' decisions and actions. Nurses are accepting the need to base specific nursing actions on evidence that they are clinically appropriate, cost-effective, and result in positive outcomes for clients.

Scope and Limitation

The study focused on determining the relationship between the different learning styles and the most preferred teaching methodology of sophomore nursing students of Far Eastern University during the first semester of school year 2010. The researchers used a descriptive correlational design. 245 sophomore nursing students were selected through purposive sampling. The study was conducted at Far Eastern University on July 17, 2010. The study did not include factors that may affect its results, such as the age and gender of the respondents, as well as whether they really wanted to be in the nursing profession. Furthermore, the respondents' level of stress and grades were also not included in the study.

CHAPTER II
THEORETICAL FRAMEWORK

Learning

According to Eric Kandel (2000), "Learning is the process by which we acquire knowledge about the world." Learning is the process in which a person consciously takes their self farther away from ignorance.
Ignorance is the lack of knowledge, the inability to understand something without guidance from an outside force. Ignorance can also be the willful act of not learning (Lindsea, 2008).

Learning Styles

The literature basically indicates that there is wide acceptance of the concept of learning styles; however, there is disagreement on how best to measure learning styles (Coffield et al., 2004). While the learning profession has long recognized the need for innovative instructional activities that relate to the diverse learning styles of learners, there is some question as to just how meaningful they are to the learning environment. That is, most researchers agree that people do have various learning styles and preferences; however, research tends to agree that this is relatively unimportant, as it is far more important to match the presentation with the nature of the subject, such as providing correct learning methods, strategies, and context, than to match individual preferences (Coffield, 2004). Perhaps David Merrill (2000) has the best philosophy for using learning styles: instructional strategies should first be determined on the basis of the type of content to be taught or the goals of the instruction (the content-by-strategy interactions), and secondarily, learner styles and preferences are then used to adjust or fine-tune these fundamental learning strategies. Finally, content-by-strategy interactions take precedence over learning-style-by-strategy interactions regardless of the instructional style or philosophy of the instructional situation. According to Rayner (2001) and Coffield (2001), the idea of a personal style in learning has clearly spread across the globe during the last decade to occupy a prominent place in professional discussion about learning and teaching. This means that the learning style of an individual matters in learning and also affects teaching.
Recent work by Burnett (2005), Cheminais (2002) and Reid (2005) identifies that the different styles in learning serve as an important component in inclusive learning and teaching in the classroom. Indeed, Cheminais (2002) suggested that to be an effective and successful teacher, one should: (a) show respect for pupils' individual learning styles and differences, (b) be responsive to pupils' different learning styles, and (c) use different levels of tasks and activities. Smith (2001) has stated that there are two modes related to grasping experience: Concrete Experience (CE) and Abstract Conceptualization (AC). In addition, he also suggested two modes of transforming experience: Reflective Observation (RO) and Active Experimentation (AE). These four modes are all engaged in the ideal learning process and must be incorporated together for effective learning, based on Kolb's learning theory. Individuals are likely to develop or use one grasping-experience approach and one transforming-experience approach. The combination of these two preferred approaches is the individual's learning style (Smith, 2001). These learning styles are the following: converger, diverger, assimilator, and accommodator.

Converger. Convergers excel in making practical applications of ideas and in using deductive reasoning to solve problems. They use active experimentation and abstract conceptualization as their approaches to transforming experience and grasping experience, respectively (Smith, 2001). They learn from thinking (Chiya, 2003).

Diverger. Divergers are characterized by concrete experience (feeling) and reflective observation (watching). They use imagination and see things from different points of view (Smith, 2001). They learn from feeling (Chiya, 2003).

Assimilator. If convergers use deductive reasoning, assimilators on the other hand use inductive reasoning in creating theoretical models.
They utilize abstract conceptualization and reflective observation as their preferred approaches (Smith, 2001). They learn from watching and listening (Chiya, 2003).

Accommodator. Accommodators are good at actively engaging with the world and actually doing things rather than merely reading about and studying them. They are characterized by concrete experience (feeling) and active experimentation (doing) (Smith, 2001). They learn from doing (Chiya, 2003). "The more learning styles learners use as their major learning styles, the more flexible and successful the learners are. If students use limited learning styles as their preference, it is more challenging for them to adjust to teachers' teaching styles" (Chiya, 2003). An interpretation amended and revised by Alan Chapman (March 2006), based on Kolb's learning styles, explains that different people naturally prefer a certain single learning style. Various factors influence a person's preferred style: notably, in his experiential learning theory (ELT) model, Kolb defined three stages of a person's development and suggests that the propensity to reconcile and successfully integrate the four different learning styles improves as people mature through their development stages. The development stages that Kolb identified are: (a) Acquisition - birth to adolescence - development of basic abilities and 'cognitive structures'; (b) Specialization - schooling, early work and personal experiences of adulthood - the development of a particular 'specialized learning style' shaped by 'social, educational, and organizational socialization'; (c) Integration - mid-career through to later life - expression of the non-dominant learning style in work and personal life.
Whatever influences the choice of style, the learning style preference itself is actually the product of two pairs of variables, or two separate 'choices' that people make, which Kolb presented as lines of axis, each with 'conflicting' modes at either end:

Concrete Experience - CE (feeling) versus Abstract Conceptualization - AC (thinking)
Active Experimentation - AE (doing) versus Reflective Observation - RO (watching)

Felder & Spurlin (2005) try to remedy the potential misuse of learning styles by pointing out that: (a) learning style dimensions are scales; mild, moderate or extreme tendencies can be exhibited; (b) learning style profiles are indicative of tendencies, and individuals at one time or another will exhibit tendencies of the opposing characteristic; (c) learning style preferences do not indicate a learner's strengths and weaknesses, only the preferred activity; (d) learning style preferences may be subject to a learner's educational experience and 'comfort'.

Teaching Methodology

Motivating students is not simply a matter of rewards, gimmicks, and games. Students respond to teachers who can inspire while they teach. Creativity is essential (Craft, 2010). According to Chiya (2003), students' learning can sometimes be hindered by the gap between the students' learning styles and the teachers' teaching styles, and also by the lack of instruction on learning strategies. Bridging this gap can only be achieved when the professors are aware of their students' needs, capacities, potentials and, most importantly, their learning styles (Rao, 2002).

Discussion. The lecture-based format is the traditional, passive way of learning. It involves situations where material is delivered to students. Recent studies show the effectiveness of active learning methods.
A comparison of lecture combined with discussion versus active, cooperative learning methods by Morgan, Whorton, & Gunsalus (2000) demonstrated that the use of lecture combined with discussion resulted in superior retention of material among students. The findings of a study by de Caprariis, Barman, & Magee (2001) suggest that lecture leads to the ability to recall facts, but discussion produces higher-level comprehension. Further, research on group-oriented discussion methods has shown that team learning and student-led discussions not only produce favorable student performance outcomes, but also foster greater participation, self-confidence and leadership ability (Perkins & Saris, 2001; Yoder & Hochevar, 2005). In considering an adapted practice model, substantial research highlights the usefulness of work-based mentorship and supervision as part of effective training strategies. Studies claim the one-to-one supervisory relationship is the most important element in clinical instruction (Saarikoski and Leino-Kilpi, 2002). Mentorship also facilitates learning opportunities, and supervises and assesses staff in the practice setting. Terminology frequently used to describe a mentor includes teacher, supporter, coach, facilitator, assessor, role model and supervisor (Hughes, 2004; Chow and Suen, 2001). This is supported by models advocating self-directed, evidence-based and problem-based learning.

Demonstration. According to Rosen, Salas, and Upshaw (2007, p. 6), demonstrations are often conceived of simply as an example of task performance; however, demonstrations are rightfully thought of as experiences where learners are prompted to actively process the informational content of the example and to systematically and reliably acquire targeted KSAs and transfer them to the work environment.
They define demonstration as "a strategically crafted, dynamic example of partial or whole task performance or of characteristics of the task environment intended to increase the learner's performance by illustrating (with modeling, simulation, or any visualization approach) the enactment of knowledge, skills, and attitudes (KSAs) targeted for skill acquisition." Demonstrations vary in terms of informational and physical characteristics (e.g., content, form of presentation). Demonstrations also vary in terms of the activities that the learner engages in prior to, during and after observing the example of task performance. According to Fisher & Frey (2008), students need to be aware of the thinking process of the teacher. Demonstration uses a combination of verbal and visual elements to accomplish a task, skill, or strategy (Fisher & Frey, 2008). The demonstration includes the sequence of steps and the decisions that accompany each step so the next step makes sense. Errors to avoid in accomplishing the task, skill or strategy are also noted (Fisher & Frey, 2008). After demonstrating the skill or strategy, students can be led to know how and when to use their new skills. They can self-assess and evaluate the approaches they use to connect the learning to the next new skill that they learn. They can begin to travel the road to self-directed learning. Teachers who have a demonstrator or personal-model teaching style tend to run teacher-centred classes with an emphasis on demonstration and modeling (School of Educators, 2010). This helps the students develop and apply skills and knowledge. According to the School of Educators (2010), a teacher with this type of teaching style might comment: "I show my students how to properly do a task or work through a problem and then I'll help them master the task or problem solution. It's important that my students can independently solve similar problems by using and adapting demonstrated methods.
" This teaching style may help an instructor or a teacher to encourage student participation and to adapt their presentation to include various learning styles. Students are expected to take some responsibility for learning what they need to know and for asking for help when they don't understand something. As lecturers, they should aim for meaningful learning through active processes, not passive transmission of facts (Michael, 2001). Students have different preferred learning styles, experiences, background knowledge, and interests; therefore, according to Michael (2001), we must use a variety of teaching strategies to maximize student learning. One such teaching strategy involves the use of interactive classroom demonstrations. Students work cooperatively to gain meaningful learning of sometimes difficult neural concepts and at the same time have fun with the subject (Michael, 2001).

Online. Terrell & Dringus (2000) investigated the effect of learning style on student success in an online learning environment and concluded that institutions offering online education programs should give consideration to the different learning styles of their students. According to Farmer (2006), online learning systems have forced teachers and learners to focus on discussion boards and shared communication spaces rather than on the individuals who are taking part in them. Online discussion is 'group-centred'. It counters the greatest use of LMS (learning management systems), which is to post content online. It is the primary mode of online interaction for constructivist learning: learning based on interpretation and construction of the world rather than reflecting an external reality (Malinowski et al., 2006). 'Reflection and even dialogue are greatly limited in most campus-based classrooms; online learning may in fact have an advantage in supporting collaboration and creating a sense of community.
An online learning environment reflects a "group-centered" interaction pattern versus an "authority-centered" pattern of a face-to-face environment' (Garrison, 2006). Pelz (2004) stated that learning does not occur spontaneously among a group of students, whether the setting is face to face or online. Online discussion requires structure just as in a face-to-face setting. In essence, online discussions provide a vehicle where knowledge is facilitated by participants interacting cooperatively with others (critical thinking) to accomplish shared learning goals (social interdependence), particularly when the learning task focuses on the solution of real-life problems (constructivist learning) (Williams & Wache, 2005). E-learning will take the form of complete courses, access to content for "just-in-time" learning, access to components, a la carte courses and services, and the separation of "courses" to acquire and test knowledge vs. content as an immediate, applicable resource to resolve an immediate, perhaps one-time-only problem. Learning is and will continue to be a lifelong process that can be accessed anywhere at any time to meet a specific need or want. Hall added that more links to real-time data and research would become readily available. Given the progression of the definitions, then, web-based training, online learning, e-learning, distributed learning, internet-based learning and net-based learning all speak of each other (Hall & Snider, 2000; Urdan & Weggen, 2000). Reverting to Hall's (2000) contention of e-learning in its all-inclusive form, distance learning as planned interactive courses, as the acquisition of knowledge and skills at a distance through various technological mediums, would seem to be one of e-learning's possible disguises. Interestingly, Urdan & Weggen (2000) saw e-learning as a subset of distance learning, online learning as a subset of e-learning, and computer-based learning as a subset of online learning.
Given the review of definitions of all these terms, 'subset' does not appear to be the most likely word to describe the relationship among these words and their forms. The definitions show a great depth of interdependence among themselves. While one person may narrowly define a term, another person could give it all-encompassing power. This communicates that e-learning, if given the all-encompassing form, can be the larger circle with which all other terms would overlap at different times and to different extents, given their user's intention. Another rationale for this choice is that "just-in-time" learning is a major advantage of e-learning but not of distance learning. Distance learning purports planned courses, or planned experiences. E-learning does not only value planned learning but also recognizes the value of the unplanned and the self-directedness of the learner to maximize incidental learning to improve performance. Similar also to e-learning and its related terms is technology-based learning (Urdan & Weggen, 2000). Urdan & Weggen shared that e-learning covers a wide set of applications and processes, including computer-based learning, web-based learning, virtual classrooms, and digital collaborations. For the purpose of their report, they further customized their definition to the delivery of content via all electronic media, including the Internet, intranets, extranets, satellite broadcast, audio/video tape, interactive TV, and CD-ROM. They warned, however, that e-learning is defined more narrowly than distance learning, which would include text-based learning and courses conducted via written correspondence. Like Hall & Snider (2000), Urdan & Weggen (2000) have set apart distance learning and e-learning in their glossaries, making, however, e-learning inclusive of and synonymous with all computer-related applications, tools and processes that have been strategically aligned to value-added learning and teaching processes.
E-learning is the acquisition and use of knowledge distributed and facilitated primarily by electronic means. This form of learning currently depends on networks and computers but will likely evolve into systems consisting of a variety of channels (e.g., wireless, satellite) and technologies (e.g., cellular phones, PDAs) as they are developed and adopted. E-learning can take the form of courses as well as modules and smaller learning objects. E-learning may incorporate synchronous or asynchronous access and may be distributed geographically with varied limits of time.

Group work (Brainstorming)

According to the study of White et al. (2005), group work was generally a positive experience for pharmacology and IT students. However, 25% of the 126 respondents responded to the open-ended questions with negative comments. These comments concerned the need for objective individual marks, avenues to discourage loafers, bias among friends in peer evaluations, and concerns with confidentiality and anonymity in peer evaluation. The researchers concluded that attitudes towards group work are probably negatively affected by group assessment and may be improved to some extent by using peer evaluation. Research shows that the group work experience was generally positive for students across the different disciplines. They saw group work as a tool to develop lifelong and generic skills in influencing and persuading, negotiating and team-building (Maiden, 2004). According to him, group work promotes the development of the said skills. A research study of Reid et al. (2005) showed that some students see group work as an undertaking that must be completed well. On the other hand, others see it as a tool that would help them advance their individual and collective knowledge. In addition, the approach that students take to their learning depends on their particular conceptions of the task at hand.
According to Petrowski et al. (2000), research on group work and creativity began in the 1950s, and it is still debatable whether creativity resides within a person, a product or a process.

Oral Recitation (Question and Answer)

Questioning students not only allows the teacher to evaluate the level of understanding but also provides for feedback, fine-tuning the levels of teaching, dealing with misconceptions early, as well as improving the educational material presented. Perhaps the key thought behind all the information above is very simple: teaching is learning. To teach is to learn. Good teachers learn and adapt to their students, and expand or refine their teaching material as they learn about themselves as well. According to Jennifer Evans (2010), oral recitation is the practice of having an entire class "recite important facts, identifications, definitions, and procedures within the instruction and later when they need to be revisited". This method proves quite beneficial to students when practiced frequently in the classroom, though the time for each session should be kept rather short, not exceeding two and one-half minutes. Hearing it said aloud by their own mouths results in a higher level of confidence in the subject matter, while also ensuring that they fully understand a topic that requires critical thinking. By engaging them in the learning process rather than just instructing, students will become far more interested in their education until they are itching for more knowledge. Also, the level of seriousness is kept to the maximum when students come to realize that this specific topic is vital enough for the entire class to participate in at once, further ensuring remembrance. This process of learning should not be set aside for the classroom alone, however; students of all ages, from elementary school to college, can use this tool to retain any form of information ranging in levels of difficulty.
It is advisable for students currently in their higher learning stages to sit in a quiet room by themselves and recite whatever facts or definitions they may need aloud. First, they can start by reading straight out of their notes or textbook, allowing themselves both to see the words on the page and to read them out loud. Then, they can progress to the true test by reciting verbally without their paper. This should be repeated a number of times before the day of the test, allowing weeks of preparation time; however, once again it is imperative not to put too much strain on the subject. The more difficult the subject is, the more important it is for a student to be able to recall it at the drop of a hat. Treating information in a more sophisticated way allows this to happen, as the mind will, too, treat the information with such a high level of care. This method also incorporates the social time all young people need to truly become comfortable in their environment. According to Bitchener & Watanabe (2008), although this part of the exchange does not reflect what is characteristic of realistic communication (you do not usually correct what other people say when they are talking), the fact that the student turned her attention to form at this precise moment has important implications for language learning, for it is an act of noticing a language item and how it should work. It is this aspect that helps us decide what to say (meaning) and how to say it (form), depending on the situation in which we find ourselves and on what was said before by us and the other participants of the conversation. Although this process is mostly and best carried out unconsciously, "meaningful use of language will necessarily imply the establishment of relevant form-meaning mappings" (van den Branden, 2006).

PowerPoint

PowerPoint is best used when students are expected to retain complex graphics, animation, and figures. For alphanumeric information (e.g.,
text and numbers), PowerPoint as well as traditional presentations can be used. According to Shock (2008), if students are expected to retain information and/or concepts that are best conveyed through dialog or verbal explanation, traditional presentations appear to be best. This type of information should not be shared verbally in the presence of PowerPoint, because people tend to focus on what is presented on the slides as opposed to what is verbalized. If students are expected to retain simple graphs and alphanumeric information that is verbalized and displayed visually, either presentation style is acceptable. Educational technologies are most effective when used properly. According to Savoy et al. (2009), the "intelligent use" of educational technologies can be defined with three components: (1) How do people learn (cognitive component)? (2) How can the learning experience be facilitated (instruction component)? (3) How can technology be used to improve instruction and learning (technology component)? Over the years there has been research to support the three components as individual entities and collectively as the cognitive theory of multimedia learning. The third component has received much attention as researchers try to evaluate the effectiveness of new educational technologies, particularly PowerPoint.

Case Presentation

It is now documented that students can learn more effectively when actively involved in the learning process (Bonwell and Eison, 1991; Sivan et al., 2001). The case study approach is one way in which such active learning strategies can be implemented in our institutions. There are a number of definitions for the term case study. For example, Fry et al. (1999) describe case studies as complex examples which give an insight into the context of a problem as well as illustrating the main point. Davis and Wilcock defined case studies as student-centred activities based on topics that demonstrate theoretical concepts in an applied setting.
This definition of a case study covers the variety of different teaching structures used, ranging from short individual case studies to longer group-based activities. According to Onishi (2008), in most clinical teaching settings, case presentation is the most frequently used teaching and learning activity. From an educational viewpoint, the two important roles of case presentations are the presenter's reflective opportunity and the clinician educator's clues to diagnose the presenter. When a presenter prepares for a case presentation, he/she has to organize all the information collected from a patient. The presenter sometimes does not recall what to ask or examine in relation to pertinent differential diagnoses while seeing a patient, and afterward he/she might note that more information should have been collected. He/she is able to note the processes by reflection-on-action and improve them the next time. Such a reflective process is the most important role of a case conference for a presenter. According to Shochet, Cayea, Levine and Wright (2007), case presentation is a time-honored tradition in clinical medicine. Expert analysis of patient cases has been the stimulus for significant discovery and advances in clinical medicine. All clinical educators encounter "memorable cases" in their teaching roles. The case presentation can also be used by educators as a means to more deeply appreciate unique or challenging learner experiences, and by doing so, enhance teaching expertise. Dissemination of these cases may lead to discoveries and advances in the practice of medical education.

Closed Circuit Television (CCTV)

The advantages of video conferencing by using closed circuit television in educational institutions are well documented.
Scholarly literature has indicated that videoconferencing technology reduces time and costs between remote locations, fills gaps in teaching services, increases training productivity, enables meetings that would not be possible due to prohibitive travel costs, and improves access to learning (Martin, 2005; Rose, Furner, Hall, Montgomery, Katsavras, & Clarke, 2000; Townes-Young & Ewing, 2005; West, 1999).

Role Playing

Role playing is a methodology derived from sociodrama that may be used to help students understand the more subtle aspects of literature, social studies, and even some aspects of science or mathematics. Further, it can help them become more interested and involved, not only learning about the material, but also learning to integrate the knowledge in action by addressing problems, exploring alternatives, and seeking novel and creative solutions. According to Blatner (2008), role playing is the best way to develop the skills of initiative, communication, problem-solving, self-awareness, and working cooperatively in teams, and these skills, above all (certainly above the learning of mere facts, many if not most of which will be obsolete or irrelevant in a few years), will help young people be prepared for dealing with the challenges of the twenty-first century. According to Pollock et al. (2006), learning to participate is an important skill for humanities and social science students to learn in today's multi-stakeholder world. The role play method develops a greater understanding of the complexity of professional practice and enables students to develop skills to engage in multi-stakeholder negotiations within the controlled environment of the classroom. Role play in the classroom can be implemented in a number of ways. It can involve online elements as well as face-to-face interactions. The length of the process can also vary according to the aims of the activity.
This guide will outline role play techniques found to be most useful for the social science classroom at a tertiary level. Role play in the classroom involves students actively in the learning process by enabling them to act as stakeholders in an imagined or real scenario. It is a technique that complements the traditional lecture and assignment format of tertiary-level social science learning. In a role play, the teacher selects a particular event or situation that illuminates key theories or may be of importance to the topic of study. Students are given detailed background readings and assigned stakeholder roles as preparation. The format of interaction between stakeholders can be varied and may depend on the time or resources available. The role play is concluded with a debriefing or reflection stage which reinforces the concepts introduced by the role play.

Video Presentation

Bassili (2006) conducted a study of college freshmen in a psychology course in order to determine whether they preferred face-to-face or streamed-video lecture delivery as a learning aid. He found that a majority of the students preferred the online video lectures. These findings imply that videotaped content, far from being a less effective vehicle for instruction, might actually increase learner motivation and interest in course material. Other articles outline the advantages of taping learner performances and asking students to watch and reflect upon these recordings. For example, some scholars have found that using videos as reflective diaries can promote critical thinking and reflection and thereby enhance learning development. Researchers have found that making reflective videos can benefit both teachers (Barton and Haydn, 2006; Gebhard, 2005) and students (Triggs and John, 2004). Levy and Kennedy (2004) found evidence for this assertion within the specific context of the language learning classroom.
They used computer video capture to record students' behavior during their audio conferencing study of Italian as a foreign language. The researchers found that these recordings became an effective tool for assisting students in visualizing and subsequently correcting their errors. Several other articles have discussed the potential impact of using videos in foreign language study. Herron, Cole, and Corrie (2000), for example, offer evidence that showing videos in the classroom allows instructors to expose language learners to authentic cultural information. Moreover, research suggests that Internet-based audiovisual resources can be effective language learning tools. Hanson-Smith (2004) describes the pedagogical benefits of using online videos as in-class learning resources. In addition, she lauds the fact that the Internet is increasing access to professional audiovisual resources that are free, authentic, and suitable for language learning development. Finally, many scholars have noted the benefits of implementing a video production component in language classes. For example, at the college level, Katchen, Morris, and Savova (2005) have explored the possibility of using video production to engage language learners, asking students to produce vocabulary-focused videos. They contend that the benefit of their approach is twofold. First, it allows students to produce videos using grammatical forms and lexical items that are relevant to them, increasing the chance that these forms and terms will be retained. Second, it facilitates the creation of learning resources for future students.

Association of learning styles to teaching methodology

The study conducted by Csapo & Hayen (2006) states that a mismatch between the learning styles of faculty and students has been shown to increase the disparity between how faculty teach and how students learn. This mismatch results in an ineffective learning process in the classroom.
â€Å"The notion that allcognitive skills are identical at the collegiate level orin different training programs smacks of arroganceand elitism by either sanctioning one group's style oflearning while discrediting the styles of others orignoring individual differences altogether â€Å"Teachers did differ in their teaching styles and the results suggest an association between teaching styles and learning styles Based on the study of Chiya (2003), divergers are characterized by concrete experience (feeling) and reflective observation (watching) while assimilators utilized abstract conceptualization (thinking) and reflective observation (watching). It was obvious that divergers and assimilators both learn through reflective observation or through watching. According to Evans (2004), these differences in teaching styles may also have an impact on areas such as classroom arrangements, the organization and assessment of activities, teacher interactions with students and academically approaches, such as the use of questioning (Evans, 2004). Evans (2004) also stated that several teachers of today are looking at how to shift their lessons to meet new education purposes. However, discussions are still more teacher-centered than student-centered in some classrooms meaning, the lessons are still based on the preferences of the teacher rather than the students. The following information are synthesize from different local and foreign related literatures and studies: Learning style is unique in every individual. Learning is the process of acquiring knowledge. As we know, individuals are unique. Each in every one of us is different and so also our learning styles. Learning styles are the approach on how an individual grasp knowledge. There are four types of learning style: Converger, Diverger, Assimilator, and Accomodator. Each type of learning style is different in terms of the way they acquire knowledge. 
Convergers, or Type I learners, excel in making practical applications of ideas and in using deductive reasoning to solve problems. They use active experimentation (doing) and abstract conceptualization (thinking) as their approaches to transforming experience and grasping experience, respectively. They learn from thinking. Divergers, or Type II learners, are characterized by concrete experience (feeling) and reflective observation (watching). They use imagination and see things from different points of view. They learn from feeling. Assimilators, or Type III learners, on the other hand, use inductive reasoning in creating theoretical models. They utilize abstract conceptualization (thinking) and reflective observation (watching) as their preferred approaches. They learn from watching and listening. Accommodators, or Type IV learners, are good at actively engaging with the world and actually doing things rather than merely reading about and studying them. They are characterized by concrete experience (feeling) and active experimentation (doing). They learn from doing. Teaching, on the other hand, is the process of giving out information. Teaching is the means of providing knowledge to individuals. As with learning styles, teaching methodologies are also unique to every teacher or instructor. Most instructors tend to stick with a specific teaching methodology. Teaching methodology has a great impact on students, and learning styles are associated with the teaching methodologies students prefer. The review of the literature indicated how important understanding learning styles and the role of learning styles in the teaching/learning process is for effective teaching.

Research Paradigm

(Kolb diagrams updated May 2006): the learning styles in relation to the most preferred teaching methodology in terms of (A) Didactics and (B)
Skills.

Shown in the figure above is a typical presentation of Kolb's two continuums: the east-west axis, called the Processing Continuum (how we approach a task), and the north-south axis, called the Perception Continuum (our emotional response, or how we think or feel about it). These learning styles are the combination of two lines of axis (continuums), each formed between what Kolb calls dialectically related modes of grasping experience (doing or watching) and transforming experience (feeling or thinking). An individual internally decides whether he/she wishes to do or watch, and at the same time decides whether to think or feel. The result of these two decisions produces and helps to form the learning style. The individual chooses a way of grasping the experience, which defines his/her approach to it, and chooses a way to transform the experience into something meaningful and usable, which defines the emotional response to the experience. With knowledge of the learning styles, the appropriate teaching methodology in terms of didactics and skills for a specific learning style can be determined and used for effective learning.

Research Hypothesis

On the basis of the questions proposed in this study, the following hypotheses were tested: a. There is no significant difference between the learning styles of sophomore nursing students. b. There is no significant relationship between the different learning styles and the preferred teaching methodology in terms of skills and didactics.

The main variables of the study

The different learning styles were the independent variable, and the dependent variable was the most preferred teaching methodologies of sophomore nursing students in terms of didactics and skills.

Definition of terms

Conceptual definition:
Learning Styles – various approaches or ways of learning.
Accommodator – a person who is willing to adapt oneself to other people's convenience.
Assimilator – a person who responds to new situations in conformity with what is already available to consciousness.
Converger – one who has special ability in answering rational, unimaginative questions.
Diverger – one who is capable of thinking imaginatively beyond the ordinary.
Teaching Methodology – the types of principles and methods used for instruction.
Didactics – teaching methods that follow a consistent scientific approach.
Lecture discussion – an informative talk given before an audience or class and usually prepared beforehand.
Recitation – written matter that is recited from memory.
PowerPoint presentation – a collection of individual slides that contain information on a topic.
Case presentation – refers to the collection and presentation of detailed information about a particular participant or small group, frequently including the accounts of the subjects themselves.
Brainstorming – an informal way of generating topics to write about, or points to make about a topic; students simply open their minds to whatever pops into them.
E-learning – the delivery of a learning, training or education program by electronic means; it involves the use of a computer or electronic device in some way to provide training, educational or learning material.
Group work – a method, used by professional social workers, of aiding a group or members of a group toward individual adjustment and increased participation in community activity by exploiting the mechanisms of group life.
Reporting – to relate or tell about; present.
Role playing – refers to the changing of one's behaviour to assume a role, either unconsciously to fill a social role, or consciously to act out an adopted role.
Skills – teaching the learned capacity to carry out pre-determined results, often with the minimum outlay of time, energy, or both.
Demonstration – the act of proving by the syllogistic process, or the proof itself.
An exhibition; proof; especially, proof beyond the possibility of doubt; indubitable evidence, to the senses or reason.
Video presentation – a video clip is a small section of a larger video presentation; a series of video frames run in succession to produce a short, animated video, and a compilation of such clips results in a video presentation.

Operational Definition:
Learning Styles – the method by which an individual acquires knowledge.
Accommodator – they tend to get information by themselves; they can easily adapt to sudden changes.
Assimilator – individuals who learn by thinking through ideas; they need certain evidence before making judgments.
Converger – individuals who learn through practical application.
Diverger – an individual who learns through observation; they love to listen and share ideas.
Teaching Methodologies – the strategies employed in teaching.
Didactics – teaching methods used in the classroom setting.
Lecture discussion – giving information to a group of people or a class, usually to educate.
Recitation – giving an answer to a given question using what was previously learned.
PowerPoint presentation – the presentation of a slide show made up of slides containing information on a topic, commonly used in giving information about a concept.
Case presentation – a case-specific presentation of data and information gathered from an individual or group of people.
Brainstorming – a method of sharing ideas by "throwing" in whatever pops out of one's mind about a certain subject matter.
E-learning – a method of imparting knowledge through the use of modern electronic devices or software.
Group work – a method of sharing ideas and combining said ideas to form a unified body of information, more commonly used by students.
Reporting – presenting detailed but brief information about a subject.
Role playing – adopting and acting out the role or personality of someone else.
Skills – a method of teaching the ability to perform a procedure repeatedly.
Demonstration – a method of imparting knowledge by showing how something is done.
Video presentation – the use of a recorded video or a series of video clips to impart knowledge on a certain subject matter.

CHAPTER III: RESEARCH METHODOLOGY

Research design

This study used a descriptive correlational design. According to Polit and Beck (2008), "descriptive research is the second broad class of non-experimental studies and its purpose is to observe, describe and document aspects of a situation as it naturally occurs and sometimes to serve as a starting point for hypothesis generating or theory development." This study described the learning styles of sophomore nursing students and their most preferred teaching methodology. It also determined whether learning styles were associated with their most preferred teaching methodology.

Population and Sample

The respondents of this study were sophomore nursing students of Far Eastern University within the school year 2010 to 2011. The sophomore nursing students had a total population of 630. Using Slovin's formula, a sample of 245 was drawn.

Table 1. Frequency Distribution and Percentage of the 6 sections included in the study

Section | Frequency | Percentage (%)
BSN 313 | 36 | 14.7
BSN 302 | 48 | 19.6
BSN 304 | 34 | 13.9
BSN 303 | 48 | 19.6
BSN 305 | 37 | 15.1
BSN SB3 | 42 | 17.1
Total | 245 | 100

The researchers used a convenience sampling method in choosing the sections included in the study, based on the inclusion and exclusion criteria. Included in the study were sophomore students who were on deck during Mondays, those who were present during the data gathering, and those who were willing to cooperate.
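The reported sample of 245 from a population of 630 is consistent with Slovin's formula at a 5% margin of error. A minimal sketch in Python; note that the 5% margin is an assumption (the text states only the population and sample sizes), though it reproduces the reported figure:

```python
import math

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Population of 630 sophomore nursing students at an assumed 5% margin of error.
n = slovin_sample_size(630, 0.05)
print(n)  # 245
```

Here 630 / (1 + 630 × 0.0025) = 630 / 2.575 ≈ 244.66, which rounds up to the 245 respondents reported above.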
Excluded from the study were freshman, junior and senior nursing students, students from other institutes, sections of sophomore nursing students who were on deck during Tuesdays, Wednesdays, Thursdays, Fridays and Saturdays, and those who were absent during the data gathering procedure. Invalid questionnaires were not included in the tally.

Research Locale

The study was conducted at Far Eastern University, specifically in the Institute of Nursing. It is a private non-sectarian university located on Nicanor Reyes Street, Sampaloc, Manila, which was suited to the respondents. The researchers chose Far Eastern University as the research locale because the behavior, experiences and characteristics that the researchers sought to observe fit the students of FEU, specifically nursing students. Furthermore, FEU had an adequate diversity or mix of students to achieve the research goal. In addition, entrance to the site was possible and access to the respondents could be granted.

Research Instruments

The instrument had two parts. The first part was Kolb's Learning Style Inventory (LSI), a standard questionnaire constructed by Kolb (1985). It was a 12-item self-description questionnaire that determines the learning style of a particular person. After taking Kolb's Learning Style Inventory and summing the totals for each learning mode, the difference between Concrete Experience (CE) and Abstract Conceptualization (AC) and the difference between Active Experimentation (AE) and Reflective Observation (RO) were computed. These differences were then plotted on the paradigm to determine the student's learning style as Diverger, Converger, Assimilator or Accommodator. The second part of the instrument was a self-made instrument.
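The LSI scoring step described above can be sketched as follows. This is a simplified illustration, not the published scoring key: the cutoff at zero on each axis is an assumption, whereas the actual LSI plots the two difference scores against normed cut-points on Kolb's grid.

```python
def kolb_style(ce: int, ac: int, ae: int, ro: int) -> str:
    """Classify a Kolb learning style from the four LSI mode totals.

    Perception axis: CE - AC (positive -> feeling, negative -> thinking).
    Processing axis: AE - RO (positive -> doing, negative -> watching).
    Zero cutoffs are a simplifying assumption for illustration only.
    """
    feeling = (ce - ac) > 0
    doing = (ae - ro) > 0
    if feeling and doing:
        return "Accommodator"   # feeling + doing
    if feeling:
        return "Diverger"       # feeling + watching
    if doing:
        return "Converger"      # thinking + doing
    return "Assimilator"        # thinking + watching

# A respondent whose highest totals fall on the thinking and watching modes:
print(kolb_style(ce=20, ac=35, ae=18, ro=30))  # Assimilator
```

The quadrant labels follow the descriptions given earlier: divergers combine feeling with watching, assimilators thinking with watching, convergers thinking with doing, and accommodators feeling with doing.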
It was a structured questionnaire in which respondents were asked to choose their most preferred teaching methodology both in acquiring skills and in learning from lectures.

Validation of instrument

The second part of the instrument was validated by three experts: the Associate Dean of the Institute of Nursing along with two other faculty members who had been in the academe for 5 years. The instrument was validated in terms of its face and content validity. A pilot test was conducted with ten (10) nursing students, after which these respondents were excluded from the actual data gathering procedure. They were able to answer all the questions in the instrument, which yielded good results.

Data Collection Procedure

A letter addressed to Dr. Glenda S. Arquiza, Dean of the Institute of Nursing, was forwarded to ask permission to conduct the study. Moreover, the researchers coordinated with the Level II coordinators to acquire the schedule of the selected respondents. The researchers used a structured paper-and-pencil instrument in which the respondents were guided by a topic guide of questions and rank-order questions in which respondents ranked target concepts along a continuum, such as most to least. The respondents were asked to answer Kolb's Learning Style Inventory. From the sections present during the data gathering, the researchers used convenience sampling in choosing the included sections to come up with the 245 respondents. The inventories were distributed by members of the research team to the selected respondents and were collected right away after they finished answering. The data collection was conducted on July 17, 2010. All instruments that were valid and had complete answers were included in the study.

Statistical Treatment

To organize the data collected, statistical tables were presented. This made the presentation of the data systematic and readily understandable.
Furthermore, the following statistical formulas were used to analyze the data collected.

Slovin's formula was used to determine the minimum number of respondents:

n = N / (1 + N * e^2)

where: n = number of samples; N = total population; e = margin of error.

To answer the first and second problem statements, "What is the learning style of sophomore nursing students?" and "What are the preferred teaching methodologies of sophomore nursing students?", frequency and percentage distribution was used:

Percentage (%) = (f / n) x 100

where: f = number of times the item occurs (frequency); n = total number of items.

To answer statement of the problem number 2, the weighted mean was used to determine the average of the students who preferred a particular teaching methodology in terms of skills and didactics:

X = Σfx / n

where: X = mean; Σ = summation; f = number of times the item occurs; x = value of the item; n = total number of items.

To answer statement of the problem number 3, "Is there a significant difference between the different learning styles?", the chi-square goodness-of-fit test was used. Its for
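The statistical treatment above can be sketched in a few lines of Python. The section frequencies come from Table 1; the learning-style counts fed to the chi-square statistic are hypothetical, purely to illustrate the computation against a null hypothesis of equal preference across the four styles.

```python
from collections import Counter

def percentage_distribution(observations):
    """Frequency and percentage of each category: Percentage = (f / n) * 100."""
    n = len(observations)
    return {k: (f, round(f / n * 100, 1)) for k, f in Counter(observations).items()}

def weighted_mean(values, freqs):
    """Weighted mean: X = sum(f * x) / n, where n = sum(f)."""
    return sum(f * x for x, f in zip(values, freqs)) / sum(freqs)

def chi_square_stat(observed, expected):
    """Chi-square goodness-of-fit statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Table 1 reconstructed as one label per respondent:
sections = (["BSN 313"] * 36 + ["BSN 302"] * 48 + ["BSN 304"] * 34
            + ["BSN 303"] * 48 + ["BSN 305"] * 37 + ["BSN SB3"] * 42)
dist = percentage_distribution(sections)
print(dist["BSN 313"])  # (36, 14.7)

# Hypothetical learning-style counts for 245 respondents (illustrative only),
# tested against a null of equal preference: each style expects 245 / 4 = 61.25.
chi2 = chi_square_stat([90, 70, 50, 35], [245 / 4] * 4)
print(round(chi2, 2))  # 28.06
```

Running `percentage_distribution` on the section labels reproduces the percentage column of Table 1; the resulting chi-square statistic would then be compared against the critical value for 3 degrees of freedom to decide hypothesis (a).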