Chapter 1
Introduction
1.1 Background of the Study
The value of good examinations cannot be underestimated. Any reputable
institution—public or private, business or educational—requires all entrants to
pass a standard examination (Bermundo, 2007). However, there must also be
an effort to test for more complex levels of understanding, with care taken to
avoid over-sampling items that assess only basic levels of knowledge (Martin,
2011). After instructors have written a set of test items, following the rules,
they do not know if the items will show which students have mastered the
topic of instruction and which have not. The items must be tried out on the
students before the instructor can determine how well each item works
(Jacob–Chase, 2009). These preconditions show that preparing good
examination questions requires a great deal of thinking and preparation on
the part of the individual doing the task, and that there must be a match
between what is being taught and what is being assessed.
Thus, item analysis is a powerful technique available to teaching
professionals and instructors for the guidance and improvement of
instruction. It enables instructors to increase their test-construction skills,
identify specific areas of course content which need greater emphasis or
clarity, and improve other classroom practices (Shakil, 2008). According to
Professional Testing Incorporated (2006), item analysis is an
important phase in the development of an exam program. In this phase,
statistical methods are used to identify any test items that are not working
well. If an item is too easy, too difficult, failing to show a difference between
skilled and unskilled examinees, or even scored incorrectly, an item analysis
will reveal it.
In addition, item analysis serves to improve items to be used later in
other tests, to eliminate ambiguous or misleading items in a single test
administration, to increase instructors' skills in test construction, and to
identify specific areas of course content which need greater emphasis or
clarity (University of Washington, 2010). Moreover, item analysis is a method
of gauging the quality of an examination by looking at its constituent parts. It
seeks to give some idea of how well an examination has performed relative to its
purposes. The primary purpose of most examinations is that of a
measurement tool, for assessing the achievements of the examination
candidates and thus how future learning can be supported and directed. It is
important for an educator to have an understanding of item analysis – its
method, assumptions, uses and limitations in order that examinations can be
assessed and improved. (McAlpine, 2010)
Furthermore, teachers give tests to learners to determine whether the
intended level of understanding of a specific learning task has been achieved. Learners’
achievements or performance can be determined through assessment of test
results or when the raw scores are evaluated with the use of statistical
techniques to arrive at meaningful results (Calmorin, 2011). In education, the
use of statistics can cover much of the whole process of teaching and
learning. It is relevant to make broad mention of the use of statistics,
particularly in an inquiry-directed or outcomes-based learning (Santos-
Navarro, 2012).
On the other hand, the multiple-choice test is regarded as one of the best
test forms in testing outcomes. This test form is most valuable and widely
used in standardized tests due to its flexibility and objectivity in terms of
scoring. This test is made up of items consisting of three or more plausible
options each. The choices are multiple so that examinees may choose only one
correct answer or best option for each item (Calmorin, 2011).
At Mindanao State University at Naawan and the MSUN-IDS Naawan
Campus, item analysis is often neglected because of the difficulty and
complexity of the process. This project details the methods of item analysis
for this purpose, and also considers how they might be used for the wider
function of assessing the learning of every student, overcoming the difficulty
and complexity of the task that burdens most teachers.
1.1.1 Narrative Listing of Existing System
The current examination system performs item analysis manually. This
process requires only pen, paper, and a calculator. The teacher first checks
the exam papers by hand, then creates a summarized score sheet in table form
for tabulating the item analysis statistics, namely item difficulty and item
discrimination. The teacher must follow specific formulas to obtain precise
results, from which he or she can recognize questions that might be poor
discriminators, gauge item difficulty from student performance, and carefully
examine the questions that discriminate well between high- and low-scoring
students. Where possible, test questions and items should be prepared well in
advance of the testing period, and reviewed and edited before they are used
in the test.
The teacher normally prepares a draft of the test. This draft is
subjected to item analysis and validation to ensure that the final version of
the test will be useful and functional. First, the teacher tries out the
draft test on a group of students with similar characteristics to the
intended test takers (try-out phase). From the try-out group, each item is
analyzed in terms of its ability to discriminate between those who know the
material and those who do not, as well as its level of difficulty (item
analysis phase). The item analysis provides information that allows the
teacher to decide whether to revise or replace an item (item revision phase).
Finally, the final draft of the test is subjected to content validation if
the test is to be used as a standard test for a particular unit or grading
period, according to Rosita L. Navarro, Ph.D. and Rosita G. Santos, Ph.D.
1.1.2 Issues and Problems
At Mindanao State University at Naawan and the Integrated Developmental
School Naawan campus, the need to item-analyze test questions is often
ignored because of the difficulty and complexity of the process; since it is
done manually, obtaining the desired statistical results consumes a great
deal of time. Teachers and instructors must calculate the item analysis
statistics by hand, following precise formulas, so the implications of item
analysis, and even the reliability of the test, are often left unexamined
because of the complicated computation. Because the work is manual,
inaccuracy and miscalculation in the statistical reports for item analysis
and reliability testing are unavoidable, given the number of papers to be
checked and the inaccessibility of records of student performance and
progress.
1.2 Statement of the Problem
The need to item-analyze test questions is often ignored at Mindanao
State University at Naawan and the Integrated Developmental School Naawan
campus because of the difficulty and complexity of the process, so
identifying students' strengths and weaknesses by extracting and interpreting
their test results is practically impossible. Moreover, no software has been
developed on the said campus for item analysis that will analyze the
examinees' answers to each item accurately and efficiently and determine the
item difficulty and the discriminating power of a given test.
1.3 Objectives of the Project
This section presents the goals of this project namely: the general and
specific objectives. The specific objectives presented here will aid the
proponents of this project in attaining its ultimate goal.
1.3.1 General Objective
The project aims to develop an automated item analysis system on a
per-examination basis that will enable teachers and instructors to determine
the level of difficulty, the power of discrimination, and the reliability of
multiple-choice test items, and that will generate automated statistical
reports by means of tables and graphs.
1.3.2 Specific Objectives
a) To gather essential information about the process of performing item analysis
undertaken during the examination.
b) To determine the approaches teachers use to assess student learning
through examinations.
c) To analyze the techniques, formulas, and algorithms for performing item
analysis that address the specified problem.
d) To design an initial system suited to the target users and environment.
e) To develop the prototype based on the constructed design.
f) To evaluate the developed system through unit testing, integration testing,
system testing and acceptance testing.
g) To apply the changes and refine the prototype based on the conducted
evaluation.
h) To deploy the system during the examination period.
1.4 Scope and Limitations of the Project
This project focuses on performing item analysis on a multiple-choice
question examination to determine the level of difficulty, the power of
discrimination, and the reliability of the test. The system presumes that
the teacher/instructor has already validated the test before it is entered
into the system. The project is also capable of generating a statistical
report of item analysis and test reliability based on the exam results,
represented by graphs and tables for easy lookup and a straightforward
understanding of the set of numbers.
The proposed application is accessible through a Local Area Network
(LAN) connection within the school campus and serves three user roles: Admin,
Teacher, and Student. It is an offline web application. It is limited to a
maximum of fifty (50) question items, with only four (4) options and one (1)
answer per item. The item analysis is generated right after the examination
ends.
1.5 Significance of the Project
The developed Examination System that specifically performs item
analysis enables instructors/teachers to increase their test construction
skills, identify specific areas of course/subject content which need greater
emphasis or clarity, and improve test effectiveness and quality. This will help
to improve test items and identify unfair or biased items. This will be the
avenue for local instructors/teachers to embrace advancing technology with
regard to students' knowledge in class; it broadens the horizons of many
students by exposing both students and educators to innovative teaching and
learning. It also provides substantial savings of time and energy over
conventional test development and measures the effectiveness of the
teaching-learning process, students' present level of achievement, and each
student's progress relative to others through the automated statistical
reports of the item analysis.
On the other hand, the developed software system will lessen, if not
eliminate, the said difficulty and complexity of the examination process.
This project is a response to this need, for it provides software for item
analysis that will analyze the examinees' answers to each item accurately and
efficiently. It also provides an opportunity to identify and examine common
misconceptions among students about a particular concept, enabling teachers
to determine the level of difficulty and the discriminating power of each
test item. This will free up the teacher's time for class preparation. It is
important for teachers to know the performance of their students, and even
their own performance in imparting knowledge and learning, for a rightful
assessment.
Chapter 2
Review of Related Literature
2.1 Related Literature
This section presents the literature related to the project, as shown below.
2.1.1 Item Analysis
Item analysis is a process which examines student responses to
individual test items (questions) in order to assess the quality of those items
and of the test as a whole (University of Washington, 2010). Item analysis is a
general term that refers to the specific methods used in education to
evaluate test items, typically for the purpose of test construction and revision
(Education, 2009).
An item analysis yields two statistics that can help analyze the
effectiveness of test questions. The first is question difficulty, the
percentage of students who selected the correct response. The second is
discrimination (item effectiveness), which indicates how well the question
separates the students who know the material well from those who don't
(Schreyer Institute, 2012).
According to Rosita L. Navarro, Ph.D. and Rosita G. Santos, Ph.D.,
after the test has been administered and scored, it is essential to determine
the effectiveness of the items. This is done by analyzing the learners'
responses to each item. The procedure is known as item analysis. Item
analysis gives the following information:
1. The difficulty of the item.
2. The discriminating power of the item.
3. The effectiveness of each option.
Information from item analysis shows whether an item is too easy or
too difficult. It determines how well the item discriminates between high
and low achievers on the test and tells whether all the options function
well. Item analysis data also help in determining specific technical defects.
Furthermore, it gives information on what improvements the test items
need.
2.1.1.1 Benefits derived from item Analysis (Navarro-Santos, 2012)
1) It provides useful information for class discussion of the test.
2) It provides data which helps students improve their learning.
3) It provides insights and skills that lead to the preparation of
better tests in the future.
2.1.1.2 Steps in a review of an item analysis report (Schreyer Institute for Teaching Excellence, 2012)
1) Review the difficulty and discrimination of each question.
2) For each question having low values of discrimination review the
distribution of responses along with the question text to
determine what might be causing a response pattern that
suggests student confusion.
3) If the text of the question is confusing, change the text or remove
the question from the course database. If the question text is not
confusing or faulty, then try to identify the instructional
component that may be leading to student confusion.
4) Carefully examine the questions that discriminate well between
high and low scoring students to fully understand the role that
instructional design played in leading to these results. Ask
yourself what aspects of the instructional process appear to be
most effective. (Schreyer Institute for Teaching Excellence, Tools
of Item Analysis, 2012)
2.1.2 Item Difficulty
The difficulty of an item or item difficulty is defined as the
number of students who are able to answer the item correctly divided
by the total number of students according to Rosita L. Navarro, Ph.D.
and Rosita G. Santos, Ph.D. Thus:
Index of Difficulty = (RU + RL) / T × 100        (2.1)
Where:
RU = the number in the upper group who answered the item correctly.
RL = the number in the lower group who answered the item correctly.
T = the total number of both upper and lower groups
Table 2.1 Arbitrary rule of Item Difficulty

    Range of Difficulty Index   Interpretation     Action
    0 – 0.25                    Difficult          Revise or Discard
    0.26 – 0.75                 Right Difficulty   Retain
    0.76 – above                Easy               Revise or Discard
Table 2.1 presents the arbitrary rule for item difficulty: for each
range of the difficulty index, obtained by computing the Index of Difficulty
formula (2.1), there is a corresponding interpretation and action.
Very easy questions may not sufficiently challenge the most able
students. However, having a few relatively easy questions in a test
may be important to verify the mastery of some course objectives.
Keep tests balanced in terms of question difficulty.
Very difficult questions, if they form most of a test, may produce
frustration among students. Some very difficult questions are
needed to challenge the best students.
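As an illustration, the Index of Difficulty computation of formula (2.1) and the interpretation rules of Table 2.1 can be sketched in Python. The function names and sample counts below are invented for this sketch and are not taken from the project's implementation:

```python
# Sketch of the Index of Difficulty from formula (2.1), with the
# interpretation rules of Table 2.1. Illustrative names and data only.

def difficulty_index(ru, rl, total):
    """(RU + RL) / T x 100: percent of upper+lower group answering correctly."""
    return (ru + rl) / total * 100

def interpret_difficulty(index):
    """Apply the arbitrary rule of Table 2.1 (index given on a 0-100 scale)."""
    p = index / 100          # convert the percentage back to a proportion
    if p <= 0.25:
        return ("Difficult", "Revise or Discard")
    elif p <= 0.75:
        return ("Right Difficulty", "Retain")
    else:
        return ("Easy", "Revise or Discard")

# 12 of 20 upper-group and 6 of 20 lower-group students answered correctly
idx = difficulty_index(ru=12, rl=6, total=40)
print(idx)                       # 45.0
print(interpret_difficulty(idx)) # ('Right Difficulty', 'Retain')
```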
2.1.3 Item Discrimination
According to Rosita L. Navarro, Ph.D. and Rosita G. Santos, Ph.D.,
difficult items tend to discriminate between those who know and
those who do not know the answer. Conversely, easy items cannot
discriminate between these two groups of students. We are therefore
interested in deriving a measure that will tell us whether an item can
discriminate between these two groups of students. Such a measure is
called an index of discrimination.
In estimating the index of discriminating power, consider the
difference in correct responses between the upper group (RU) and the
lower group (RL), divided by the number of students in
each group (NG). The formula is as follows:
Index of Discrimination = (RU − RL) / NG        (2.2)
Where:
RU = the number in the upper group who answered the item correctly
RL = the number in the lower group who answered the item correctly
NG = the number of students in each group
The discriminating power of an item is reported as a decimal fraction;
maximum discriminating power is indicated by an index of 1.00. Maximum
discrimination is usually found at the 50 percent level of difficulty.
Table 2.2 presents the rule of thumb for item discrimination: for each
range of the discrimination index, obtained by computing the Index of
Discrimination formula (2.2), there is a corresponding interpretation and
action.
Table 2.2 Item Discrimination rule of thumb
If a question is very easy so that nearly all students answered correctly, the
question's discrimination will be near zero. Extremely easy questions
cannot distinguish among students in terms of their performance.
If a question is extremely difficult so that nearly all students answered
incorrectly, the discrimination will be near zero.
The most effective questions will have moderate difficulty and high
discrimination values. The higher the value of discrimination is, the more
effective it is in discriminating between students who perform well on the
test and those that don’t.
Questions having low or negative values of discrimination need to be
reviewed very carefully for confusing language or an incorrect key. If no
confusing language is found then the course design for the topic of the
question needs to be critically reviewed.
    Index Range    Interpretation                              Action
    -1.0 – -0.50   Can discriminate but item is questionable   Discard
    -0.55 – 0.45   Non-discriminating                          Revise
    0.46 – 1.0     Discriminating item                         Include
A high level of student guessing on questions will result in a question
discrimination value near zero.
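The discrimination index of formula (2.2) and the rule of thumb of Table 2.2 can be sketched in the same way. The names and response counts are illustrative; the threshold values follow Table 2.2 as printed:

```python
# Sketch of the Index of Discrimination from formula (2.2), with the
# rule of thumb in Table 2.2. Illustrative names and data only.

def discrimination_index(ru, rl, ng):
    """(RU - RL) / NG: difference in correct responses between groups."""
    return (ru - rl) / ng

def interpret_discrimination(d):
    """Apply the rule of thumb in Table 2.2."""
    if -1.0 <= d <= -0.50:
        return ("Can discriminate but item is questionable", "Discard")
    elif d <= 0.45:
        return ("Non-discriminating", "Revise")
    else:
        return ("Discriminating item", "Include")

# 16 of 20 upper-group and 5 of 20 lower-group students answered correctly
d = discrimination_index(ru=16, rl=5, ng=20)
print(round(d, 2))                 # 0.55
print(interpret_discrimination(d)) # ('Discriminating item', 'Include')
```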
2.1.4 Distracter Analysis
In a multiple-choice question, the incorrect choices are called
“distracters.” When creating test questions, consider the following:
1) Some percentage of students should select each distracter (in lieu of the
correct answer); otherwise, the distracter is not effective.
2) If too great a percentage of students select a particular distracter, there
might be an ambiguity in the wording of the question or the wording of
that particular distracter.
3) A distracter has value if the percentage of students selecting it differs
based on their overall test performance. A good distracter is one that is
selected by few students who were in the top third of the class, but chosen
by many students in the bottom third.
The distractor should be considered an important part of the item.
Nearly 50 years of research shows that there is a relationship between the
distractors students choose and total exam score. The quality of the
distractors influences student performance on an exam item. Although the
correct answer must be truly correct, it is just as important that the
distractors be incorrect. Distractors should appeal to low scorers who have
not mastered the material whereas high scorers should infrequently select
the distractors. Reviewing the options can reveal potential errors of
judgment and inadequate performance of distractors. These poor
distractors can be revised, replaced, or removed. (University of Texas,
2011)
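A distracter analysis along these lines can be sketched as follows: rank examinees by total score, take the top and bottom thirds, and count which option each group chose on one item. The (total score, chosen option) pairs below are invented for illustration:

```python
# Sketch of a distracter analysis: a good distracter attracts many low
# scorers and few high scorers. Data is invented for illustration.
from collections import Counter

def distracter_counts(responses):
    """responses: list of (total_score, chosen_option) pairs for one item."""
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    third = len(ranked) // 3
    upper = Counter(opt for _, opt in ranked[:third])   # top third
    lower = Counter(opt for _, opt in ranked[-third:])  # bottom third
    return upper, lower

responses = [(48, 'A'), (45, 'A'), (44, 'A'), (30, 'B'), (28, 'A'),
             (27, 'C'), (15, 'B'), (12, 'C'), (10, 'B')]  # key answer: 'A'
upper, lower = distracter_counts(responses)
print(upper)  # Counter({'A': 3})  - high scorers pick the key
print(lower)  # Counter({'B': 2, 'C': 1})  - low scorers pick distracters
```

Here options B and C behave as useful distracters: they draw responses from the bottom third but none from the top third.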
2.1.5 Cronbach’s Alpha
Cronbach’s alpha is the most commonly used measure of
reliability (Allen, 2010). Alpha was developed by Lee Cronbach in
1951 to provide a measure of the internal consistency of a test or
scale; it is expressed as a number between 0 and 1. Internal
consistency describes the extent to which all the items in a test
measure the same concept or construct and hence it is connected to
the inter-relatedness of the items within the test. (Dennick et al.,
2011).
Standardized Cronbach’s Alpha formula, thus:

    α = (k · r̄) / (1 + (k − 1) · r̄)        (2.3)

Where:
    k = the number of items (indicators)
    r̄ = the average inter-item (inter-indicator) correlation
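A minimal sketch of the standardized alpha computation, assuming the usual standardized form α = k·r̄ / (1 + (k − 1)·r̄), where k is the number of items and r̄ the mean inter-item correlation. The values are illustrative:

```python
# Sketch of standardized Cronbach's alpha, formula (2.3):
# alpha = k * rbar / (1 + (k - 1) * rbar). Illustrative values only.

def standardized_alpha(k, rbar):
    """k: number of items; rbar: average inter-item correlation."""
    return (k * rbar) / (1 + (k - 1) * rbar)

# 10 items with an average inter-item correlation of 0.3
print(round(standardized_alpha(10, 0.3), 3))  # 0.811
```

Note how alpha grows with both the number of items and the average correlation, consistent with the inter-relatedness interpretation above.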
2.1.6 Reliability and Validity Testing
Reliability is the degree to which an assessment tool produces stable
and consistent results and the extent to which it measures learning
consistently. On the other hand, validity refers to how well a test measures
what it purports to measure and the extent to which it measures what it
was designed to measure (Max, 2011).
Why is this necessary? While reliability is necessary, it alone is not
sufficient: for a test to be valid, it also needs to be reliable. Here are
some examples that will help us understand the reliability and validity of a
test.
a) If a scale is reliable, it tells you the same weight every time you step
on it as long as your weight has not actually changed. However, if the
scale is not working properly, this number may not be your actual
weight. If that is the case, this is an example of a scale that is reliable,
or consistent, but not valid. For the scale to be valid and reliable, not
only does it need to tell you the same weight every time you step on the
scale, but it also has to measure your actual weight.
b) Switching back to testing, the situation is essentially the same. A test
can be reliable, meaning that the test-takers will get the same score no
matter when or where they take it, within reason of course. But that
doesn't mean that it is valid or measuring what it is supposed to
measure. A test can be reliable without being valid. However, a test
cannot be valid unless it is reliable.
2.1.7 Item Statistics
Statistics is a scientific discipline. It is a branch of Mathematics that
“deals with the collection, organization, presentation, computation and
interpretation of data which are the outcomes of learning” (Santos et al.,
2010).
Item statistics are used to assess the performance of individual test
items on the assumption that the overall quality of a test derives from the
quality of its items (University of Washington, 2009). Item statistics is a way
to get information from data in the test item. It is a tool for creating an
understanding from a set of numbers (Opre, 2010).
Figure 2.1 Statistical process
Here are some sample item statistics that show statistical reports of item analysis. Refer to Tables 2.3, 2.4, and 2.5:
Table 2.3 Item statistics – Sample 1
Referring to Table 2.3, it shows the analysis of the mean, standard
deviation, difficulty index, and discrimination index, as well as the weights,
means, frequencies, and distribution of each item in a midterm examination.
Table 2.4 Item Statistics – Sample 2
Table 2.4 shows the distinction of the scores between the upper and
lower groups, specifically for item no. 18.
Table 2.5 Item Statistics – Sample 3
Table 2.5 shows the difficulty index with its corresponding remarks and the item discrimination remark for each specific item.
There are two general types of statistical analysis:
2.1.7.1 Descriptive statistics
Descriptive statistics uses methods to summarize a collection of
data by describing what was observed using numbers or graphs
(Santos et al., 2009).
2.1.7.2 Inferential statistics
Inferential statistics, also called predictive statistics, uses
methods to find patterns in the collected data and then make
conclusions, predictions, or forecasts about the group or process
being studied (Santos et al., 2009).
2.1.8 Multiple-Choice Test
The multiple-choice test is regarded as one of the best test forms in
testing outcomes. This test form is most valuable and widely used in
standardized tests due to its flexibility and objectivity in terms of scoring. This
test is made up of items consisting of three or more plausible options each.
The choices are multiple so that examinees may choose only one correct answer
or best option for each item (Calmorin, 2011).
2.1.9 Bloom’s Taxonomy
Questions (items) on quizzes and exams can demand different levels of
thinking skills. For example, some questions might be simple memorization
of facts, and others might require the ability to synthesize information from
several sources to select or construct a response. Benjamin Bloom created a
hierarchy of cognitive skills (called Bloom's taxonomy) that is often used to
categorize the levels of cognitive involvement (thinking skills) in educational
settings. The taxonomy provides a good structure to assist teachers in writing
objectives and assessments. It can be divided into two levels -- Level I (the
lower level) contains knowledge, comprehension, and application; Level II (the
higher level) includes analysis, synthesis, and evaluation. Refer to
Figure 2.2.
Figure 2.2 Blooms Taxonomy
Sometimes objective tests (such as multiple choice) are criticized because
the questions emphasize only lower-level thinking skills (such as knowledge and
comprehension). However, it is possible to address higher level thinking skills via
objective assessments by including items that focus on genuine understanding --
"how" and "why" questions. Multiple choice items that involve scenarios, case
studies, and analogies are also effective for requiring students to apply, analyze,
synthesize, and evaluate information. (Classroom Assessment, 2009)
2.2 Related Existing System
This section presents existing systems related to the project, with their characteristics, as shown below.
2.2.1 Statistical Product and Service Solutions (SPSS) Item Analysis Software
SPSS offers item analysis procedures that are useful to the teaching
profession, especially in assessing students' performance. SPSS uses
statistical tools for measuring item analysis and is an ideal way for teachers to
evaluate their students' performance, especially when based on exam scores.
In SPSS, the main function of item analysis is to help teachers or instructors
improve the examinations they create. SPSS is also an ideal tool for accurate
item analysis (SPSS Inc., 2008).
SPSS item analysis gives test creators or test developers, mostly teachers and
instructors, a means of measuring consistency. SPSS provides a measurement of
the internal reliability (consistency) of the test items called “Cronbach's
Alpha.” The higher the correlation among the items, the greater the alpha.
High correlation implies that high (or low) scores on one question are
associated with high (or low) scores on the other questions. Alpha can vary
from 0 to 1, with 1 indicating that the test is perfectly reliable.
Furthermore, computing Cronbach's Alpha with a particular item removed from
consideration is a good measure of that item's contribution to the entire
test's assessment performance. In the SPSS output below, these and other
statistics are automatically generated using a procedure called “Reliability”
(in the output, the term scale refers to the collection of all test items).
Figure 2.3 Output for the SPSS Reliability program
Note the column “Corrected Item-Total Correlation.” This column
displays the corrected point biserial correlation. You can see that question
three seems to correlate well with overall test performance. On the other
hand, students who tend to do poorly on the test overall tend to answer
question eight correctly. This is not a desired outcome. Notice also the
“Alpha” in the lower left-hand corner and the column titled “Alpha if Item
Deleted.” If question eight is deleted from the scale, the Alpha statistic climbs
to .68. With this perspective, question eight needs to be critically examined
and perhaps rewritten. Refer to Figure 2.3.
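The "Alpha if Item Deleted" statistic described above can be sketched using the raw-score form of Cronbach's alpha, α = k/(k−1) · (1 − Σ item variances / variance of total scores). This is not SPSS's code, only the same computation performed on an invented 0/1 score matrix:

```python
# Sketch of "Alpha if Item Deleted" with the raw-score form of Cronbach's
# alpha. The score matrix (rows = students, columns = items) is invented.
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: list of per-student rows, one 0/1 score per item."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def alpha_if_deleted(scores, item):
    """Alpha recomputed with one item (0-based column index) removed."""
    reduced = [[v for i, v in enumerate(row) if i != item] for row in scores]
    return cronbach_alpha(reduced)

scores = [[1, 1, 1, 0], [1, 1, 0, 1], [1, 0, 1, 1],
          [0, 1, 1, 0], [0, 0, 0, 0], [1, 1, 1, 1]]
print(round(cronbach_alpha(scores), 2))                     # 0.56
print([round(alpha_if_deleted(scores, i), 2) for i in range(4)])
```

An item whose removal raises alpha, like question eight in the SPSS output above, is a candidate for revision.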
2.2.2 E-Learning System Of Liceo de Cagayan University-Grade School Department
This system study mainly focused on developing an e-learning system that
provides different levels of tasks performed by the administrator, teacher,
students, and parents. The system was also developed to handle and control the
delivery of instructor-led online courses. The system can also perform item
analysis, which the Admin has the right to do (Tagupa, et al., 2012).
Figure 2.4 E-Learning System Of Liceo de Cagayan University-Grade School Department – Item Analysis Chart
Figure 2.4 shows the graphical representation of item analysis, with
legends corresponding to each color: blue represents pupils who got the
correct answer and red represents pupils who got the wrong answer. The quiz
is taken online, and only the administrator can perform the item
analysis.
Figure 2.5 E-Learning System Of Liceo de Cagayan University-Grade School Department – Index of Effectiveness of options or distracters.
Figure 2.5 shows the Index of Effectiveness of options or distracters. It shows the item number with its corresponding key answer, and also displays the upper and lower groups with the discriminating power results for every option.
Figure 2.6 E-Learning System Of Liceo de Cagayan University-Grade School Department – Question Bank
Figure 2.6 shows the question bank of the system, a feature available to the teacher. It enables the teacher to store previous tests so that he or she may reuse them in other tests, or revise items based on the difficulty and discriminating power of each test item.
2.2.2.1 Technology Used:
Front-end: HTML, CSS, Javascript, JQuery, Ajax and PHP
Back-end: MySQL
Programming Environment: Notepad++
Utilized Software: Adobe Photoshop CS3 for web and graphic design
Chapter 3
Methodology
This section discusses the methodology used in developing the project, as
shown in Figure 3.1 below:
Figure 3.1 – Project Methodology Model (Modified Waterfall)
As shown in Figure 3.1, requirements analysis is the first phase. The
database and architectural designs are worked out in the design phase. In the
development phase, the construction and coding of the project are carried
out. Unit and usability testing are performed in the testing phase. The
deployment of the project takes place in the implementation phase. Lastly,
the maintenance phase covers the upkeep of the completed project.
3.1 Requirement Analysis
In this phase, the researchers used the following methods to initiate the
development of the proposed system.
Define the research problem
To gather the data upon which the system was based, the
researchers conducted a series of interviews with teachers and
instructors to identify the specific problems they had encountered
in handling the current system and to learn about the existing
system they used.
Prepare the data and implement a sampling plan
To determine the requirements to be covered in developing the
proposed system. This is concerned with establishing what the
ideal system has to perform and what reports it must generate;
meanwhile, it does not determine how the software will be designed
or built.
Develop a design structure
To conceptualize, operationalize, and test the measures of the
proposed system. This also involves developing and documenting a
database structure that integrates the various measures, and using
simple graphical analysis to describe the basic features of the
developed system. This describes what the system is and what it
shows.
3.1.2 Conceptual Analysis
The idea of this project is simply to create a quick and easy way of taking
examinations. The project involves three users: the student, the teacher, and the
system administrator. The student answers the examination electronically. The
teacher is responsible for creating the questions as well as performing the item
analysis. The system administrator is responsible for the overall examination
reports.
Figure 3.2 Hierarchical Framework of the Project
Figure 3.2 shows the supervision of every module in the system: the admin
supervises the teacher level, and the teacher supervises the student level. The
developed project has three modules, which are the following:
Admin module: has the right to create new accounts that can access the
entire system, and to activate and deactivate accounts.
Teacher module: has the right to perform item analysis, to create and
monitor exams, and to generate statistical reports.
Student module: the user who takes the exam.
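The module rights above can be pictured as a simple role-to-permission mapping. The role and permission names in this sketch are illustrative assumptions, not identifiers from the actual system:

```javascript
// Illustrative role-permission map for the three modules (assumed names).
const permissions = {
  admin:   ['createAccount', 'activateAccount', 'deactivateAccount'],
  teacher: ['createExam', 'monitorExam', 'runItemAnalysis', 'generateReports'],
  student: ['takeExam'],
};

// Returns true if the given role may perform the given action.
function canPerform(role, action) {
  return (permissions[role] || []).includes(action);
}

console.log(canPerform('teacher', 'runItemAnalysis')); // true
console.log(canPerform('student', 'createExam'));      // false
```

A map like this keeps the supervision hierarchy in one place, so adding a new right to a module means editing a single list rather than scattered checks.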
3.2 Design
Design is a very important aspect of any application development process, as it
largely determines the success of the end product. Refer to figure 3.2 for the
Architectural Design of the Project.
3.4 Development
This phase is devoted to developing code as well as to builds of hardware and
software components. The process of creating interim builds allows a team to find
issues early in the development process, which shortens the development cycle and
lowers the cost of the project. Daily builds are the practice of assembling all the
components working toward the final goal of a solution; this enables the team to
determine earlier rather than later whether all components will work together. This
method also allows the team to add functionality onto a stable build, the idea being
to have a shippable product ready at any point in time. In this way, the stability of
the total solution is well understood and ample test data exists prior to release into
production.
Table 3.2 Software Specification
Front End: JavaScript, HTML, and CSS
Back End: PHP
Database: Firebird SQL
Programming Environment: Notepad++
Operating System: Windows XP or higher
3.5 Testing
It is very important to have decent-quality software. This means the quality should
satisfy many requirements, such as keeping the GUI easy to use and minimizing
faults and failures. A lot of effort is required to keep this quality at a reasonable
standard. Testing is one of the most important parts of quality assurance,
especially during the development stages. As the development of the program
comes to an end, it becomes harder to fix errors, and in fact harder to spot them.
It is therefore advisable to check each section during development so that errors
can be spotted and fixed before progressing to the next stage. If testing is not
done during the development stages, it is more than likely that many bugs and
errors will remain. Some problems may not be seen during development without
testing at the end; for example, a function might be called while the stack is
empty, which could crash the system. If testing is done, this can be spotted before
proceeding to the next stage. (Higgins, 2010)
3.5.1 Unit Testing
Unit testing is a method of verification and validation in which the programmer
tests each separate piece of code to see whether it is viable to use. This type of
testing is performed on a small scale and uses small units of the program. In
procedural programming, a unit can be any individual function or procedure,
written in the same language as the production code. Testing is a way to increase
confidence that the project meets its requirements.
To ensure that the project meets its requirements:
Make sure the program does what it is supposed to do
Make sure we know what it is supposed to do
Testing ensures that the software solution is among those that meet the
requirements.
3.5.2 Integration testing
In this phase, individual modules are combined and tested as a group. Data
transfer between the modules is tested thoroughly.
Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the
interface between them is tested. A component, in this sense, refers to an
integrated aggregate of more than one unit. In a realistic scenario, many units are
combined into components, which are in turn aggregated into even larger parts of
the program. The idea is to test combinations of pieces and eventually expand the
process to test your modules with those of other groups. Eventually all the modules
making up a process are tested together. Beyond that, if the program is composed
of more than one process, they should be tested in pairs rather than all at once.
Integration testing identifies problems that occur when units are combined. By
using a test plan that requires you to test each unit and ensure the viability of each
before combining units, you know that any errors discovered when combining units
are likely related to the interface between units. This method reduces the number
of possibilities to a far simpler level of analysis.
3.5.3 System Testing
System testing is concerned with the behavior of the system as a whole: the fully
integrated application is tested against its specified requirements.
3.5.4 Acceptance Testing
In acceptance testing, the intended users evaluate the completed system to
determine whether it meets their requirements and is acceptable for delivery.
3.6 Implementation
Introducing a capstone project to a course provides students with an opportunity
to demonstrate their cumulative learning at the end of the course or project. The
activities take place in a project-team environment, guided by the engineering
design process and culminating in the communication and demonstration of a
uniquely designed operational product or system.
To lend an air of authenticity to the project, a community sponsor or mentor
helps provide context and guidance to the students. This person, along with other
potential outside industry representatives, participates in the judging of the project
outcomes. The community sponsor or mentor:
• Helps form the scope of the project
• Helps put the project in context
• Provides student mentoring (schedule permitting)
• Participates in project evaluation
• May provide logistical support
In this phase, the production system is installed, initial user training is
completed, user documentation is delivered, and the post-implementation review
meeting is held. When this phase is completed, the application is in steady-state
production. Once the system is in steady-state production, it is reviewed to ensure
that all of the goals in the project plan were met with a satisfactory result. During
this phase, the documentation and tools that the customer uses to make informed
decisions about how to deploy the software securely are also created.
3.7 Maintenance
The maintenance phase involves making changes to hardware, software, and
documentation to support its operational effectiveness. It includes making changes
to improve a system's performance, correct problems, enhance security, or address
user requirements. To ensure modifications do not disrupt operations or degrade a
system's performance or security, organizations should establish appropriate
change management standards and procedures.