Call Taxi Full
SYNOPSIS
Call Taxi Automated System (CTAS) provides a complete solution to the day-to-day needs
of running a call taxi office, helping you streamline your business from booking and dispatch
to invoicing, reporting, and driver management. It is fast and easy to use, robust and logical.
It makes booking and dispatch efficient, reliable, and accurate. A variety of value-added
functions have been implemented to empower the booking-handling process and provide easy
and powerful management and reporting tools.
The software allows each call-taker to book jobs, view activity in the system,
and resolve problems. Our computerized booking-handling system offers efficient, easy-to-
understand, hassle-free booking handling by taking advantage of modern technology while
keeping the interface simple to understand and use.
CTAS supports the call taxi office's management and operators in maintaining and
managing telephone booking orders. The system provides a number of value-added functions,
from empowering the booking-handling process and driver management to maintaining driver
records. CTAS has been integrated with the latest route-mapping and vehicle-tracking
functionality, along with our own easy paperless booking, an easy-to-use dispatching system,
finding/editing bookings, regular bookings, accurate pricing based on route and vehicle type,
adding/editing plots, calculating driver wages, reports, and many more tools to help.
Using CTAS ensures strong and smooth internal workflow management. It provides a
complete service package, including a computerized booking-handling system, a financial-
handling system, a staff-management system, and much more.
Features
Easy access to customer details.
Automation of regular bookings.
Driver/vehicle record system.
A reliable and intelligent booking and manual dispatch system.
1.1 PROJECT OVERVIEW
Modules
Customer Details Maintenance
Driver Detail Maintenance
Driver Attendance Maintenance
Car Location Maintenance
Booking Maintenance
Accounts Maintenance
Report
MODULE DESCRIPTION
Customer Details Maintenance
This module stores the customer's personal details: name, phone number, mail ID, and
an alternate contact number. For a new customer, the details are taken at booking time; for an
existing customer, the phone or contact number is verified against the stored record.
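The new-versus-existing customer check described above can be sketched as follows. This is a simplified in-memory illustration; the class and field names (Customer, CustomerRegistry, FindOrCreate) are hypothetical and not taken from the actual CTAS schema.

```csharp
using System;
using System.Collections.Generic;

// Illustrative customer record; fields mirror those listed in the module description.
class Customer
{
    public string Name = "";
    public string Phone = "";       // primary lookup key
    public string MailId = "";
    public string AlternateNo = "";
}

class CustomerRegistry
{
    // In-memory stand-in for the customer table, keyed by phone number.
    private readonly Dictionary<string, Customer> byPhone =
        new Dictionary<string, Customer>();

    // Returns the stored record when the phone number is already known
    // (existing customer); otherwise stores the caller's details as new.
    public Customer FindOrCreate(string phone, string name, string mailId, string altNo)
    {
        Customer existing;
        if (byPhone.TryGetValue(phone, out existing))
            return existing;        // existing customer: verified by phone number

        var fresh = new Customer
        {
            Name = name, Phone = phone, MailId = mailId, AlternateNo = altNo
        };
        byPhone[phone] = fresh;     // new customer: details taken at booking time
        return fresh;
    }
}
```

A real implementation would query the customer table instead of a dictionary, but the branch on "known phone number" is the same.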
Driver Detail Maintenance
This module stores the personal details relating to each driver: which shifts he/she
drove and when, meter totals, any incidents, fines, training completed, periods of
unavailability, and savings and tax information.
Accounts Maintenance
Drivers running a dispatch submit invoices to debtors (account holders); the
accounts module makes this task quick and easy. Fares paid for by voucher are added
to an invoice, which the admin creates and prints with a statement if required. Outstanding
accounts and invoice payments are easily tracked, as are the owners' disbursements if the
depot has multiple owners.
Driver Attendance Maintenance
This module maintains the daily attendance details of the drivers, from which driver
availability is calculated. If a driver takes leave, his car does not appear in the available
list during the booking operation.
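The availability rule above, where a driver on leave removes his car from the bookable list, can be sketched like this. The types and names (DriverDay, AvailableCars) are illustrative assumptions, not the project's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One day's attendance entry for a driver and the car he drives.
class DriverDay
{
    public string Driver = "";
    public string CarNo = "";
    public bool OnLeave;
}

static class Availability
{
    // Only cars whose driver is present today are offered during booking.
    public static List<string> AvailableCars(IEnumerable<DriverDay> attendance)
    {
        return attendance.Where(d => !d.OnLeave)
                         .Select(d => d.CarNo)
                         .ToList();
    }
}
```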
Car Location Maintenance
This module verifies the locations assigned to each car. For example, if car no. 001 is set
for Mattuthavani, Thallakulam, and Goripalayam, the call taxi owner can change or delete the
car details for a particular location. The mapping is used to pick up the user with the car
available nearest to them.
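A minimal sketch of the location-to-car lookup this module maintains, using the example areas from the text. The method name FindCarFor and the dictionary shape are assumptions for illustration only.

```csharp
using System;
using System.Collections.Generic;

static class CarLocations
{
    // carAreas maps a car number to the areas it is set for,
    // e.g. "001" -> Mattuthavani, Thallakulam, Goripalayam.
    public static string FindCarFor(string area,
        Dictionary<string, List<string>> carAreas)
    {
        foreach (var kv in carAreas)
            if (kv.Value.Contains(area))
                return kv.Key;      // first car set for the caller's area
        return null;                // no car currently set for that location
    }
}
```

The owner's change/delete operations would simply edit the lists behind this mapping.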
Booking Maintenance
This module books a car based on the user's requirement, received through a phone call. If a
new customer wants a car, the customer's information is stored; if an existing customer wants
a car, the information is fetched from the database.
Report
This is the final module of the project. It produces the following reports:
Driver attendance
Car bill
Month-wise car settlement amount
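The month-wise settlement report listed above amounts to grouping trips by month and totaling the fares. A sketch using LINQ follows; the Trip type and its fields are hypothetical stand-ins for the project's booking records.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative trip record: one completed, billed journey.
class Trip
{
    public string CarNo = "";
    public DateTime Date;
    public decimal Fare;
}

static class Reports
{
    // Groups trips by calendar month ("yyyy-MM") and sums the fares,
    // producing the month-wise settlement amounts.
    public static Dictionary<string, decimal> SettlementByMonth(IEnumerable<Trip> trips)
    {
        return trips.GroupBy(t => t.Date.ToString("yyyy-MM"))
                    .ToDictionary(g => g.Key, g => g.Sum(t => t.Fare));
    }
}
```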
1.3 SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system analysis,
the feasibility study of the proposed system is carried out to ensure that the proposed system
is not a burden to the company. For feasibility analysis, some understanding of the major
requirements of the system is essential.
Three key considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact the system will have on
the organization. The funds the company can pour into research and development of the system
are limited, and the expenditures must be justified. The developed system was kept well within
budget, which was achieved because most of the technologies used are freely available; only
the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources; otherwise, high demands would be placed on the client. The
developed system has modest requirements, as only minimal or no changes are required to
implement it.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user, which
includes training the user to use the system efficiently. The user must not feel threatened by
the system but accept it as a necessity. The level of acceptance depends on the methods
employed to educate users about the system and make them familiar with it. Their confidence
must be raised so that they can also offer constructive criticism, which is welcomed, as they
are the final users of the system.
1.3.1 Existing System
The existing system has the following drawbacks:
1. Manual system
2. Hard to manage
3. Hard to maintain reports
4. Time wastage
5. Confusion in maintaining booking details
6. Poor reporting
2.1 Proposed System
The proposed system offers the following advantages:
1. Automated billing and reports
2. Easy to maintain
3. Reports by date, month, and year
4. One-click booking
5. Maintains driver details
6. Maintains driver attendance
7. SMS notifications
2.4 SYSTEM SPECIFICATION
2.4.1 HARDWARE REQUIREMENTS
Processor : Dual core
Speed : 2.66 GHz and above
RAM : 2 GB
Hard disk : 160 GB
Keyboard : 104 keys
Mouse : Optical scroll mouse
Monitor : 17-inch LG color monitor
2.4.2 SOFTWARE REQUIREMENTS:
OPERATING SYSTEM : WINDOWS XP or Higher
IDE : Visual Studio .NET 2005/2008
LANGUAGE : C#.NET
DATABASE : SQL SERVER 2005
SOFTWARE ENVIRONMENT
AN INTRODUCTION TO .NET FRAMEWORK
The .NET Framework is an integral Windows component that supports building and
running the next generation of applications and XML Web services. The .NET Framework is
designed to fulfill the following objectives:
To provide a consistent object-oriented programming environment whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed
remotely.
To provide a code-execution environment that minimizes software deployment and
versioning conflicts.
To provide a code-execution environment that promotes safe execution of code, including
code created by an unknown or semi-trusted third party.
To provide a code-execution environment that eliminates the performance problems of
scripted or interpreted environments.
To make the developer experience consistent across widely varying types of applications,
such as Windows-based applications and Web-based applications.
To build all communication on industry standards to ensure that code based on the .NET
Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and
the .NET Framework class library. The common language runtime is the foundation of the .NET
Framework. You can think of the runtime as an agent that manages code at execution time,
providing core services such as memory management, thread management, and remoting, while
also enforcing strict type safety and other forms of code accuracy that promote security and
robustness. In fact, the concept of code management is a fundamental principle of the runtime.
Code that targets the runtime is known as managed code, while code that does not target the
runtime is known as unmanaged code. The class library, the other main component of the .NET
Framework, is a comprehensive, object-oriented collection of reusable types that you can use to
develop applications ranging from traditional command-line or graphical user interface (GUI)
applications to applications based on the latest innovations provided by ASP.NET, such as Web
Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code, thereby
creating a software environment that can exploit both managed and unmanaged features.
The .NET Framework not only provides several runtime hosts, but also supports the
development of third-party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side environment
for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications
and XML Web services, both of which are discussed later in this topic.
Internet Explorer is an example of an unmanaged application that hosts the runtime (in
the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to
embed managed components or Windows Forms controls in HTML documents. Hosting the
runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls)
possible, but with significant improvements that only managed code can offer, such as semi-
trusted execution and isolated file storage.
The following illustration shows the relationship of the common language runtime and
the class library to your applications and to the overall system. The illustration also shows how
managed code operates within a larger architecture.
.NET FRAMEWORK IN CONTEXT
The following sections describe the main components and features of the .NET
Framework in greater detail.
FEATURES OF THE COMMON LANGUAGE RUNTIME
The common language runtime manages memory, thread execution, code execution, code
safety verification, compilation, and other system services. These features are intrinsic to the
managed code that runs on the common language runtime.
With regards to security, managed components are awarded varying degrees of trust,
depending on a number of factors that include their origin (such as the Internet, enterprise
network, or local computer). This means that a managed component might or might not be able
to perform file-access operations, registry-access operations, or other sensitive functions, even if
it is being used in the same active application.
The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but cannot
access their personal data, file system, or network. The security features of the runtime thus
enable legitimate Internet-deployed software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict type-and-code-
verification infrastructure called the common type system (CTS). The CTS ensures that all
managed code is self-describing. The various Microsoft and third-party language compilers
generate managed code that conforms to the CTS. This means that managed code can consume
other managed types and instances, while strictly enforcing type fidelity and type safety.
In addition, the managed environment of the runtime eliminates many common software
issues. For example, the runtime automatically handles object layout and manages references to
objects, releasing them when they are no longer being used. This automatic memory
management resolves the two most common application errors, memory leaks and invalid
memory references.
The runtime also accelerates developer productivity. For example, programmers can
write applications in their development language of choice, yet take full advantage of the
runtime, the class library, and components written in other languages by other developers. Any
compiler vendor who chooses to target the runtime can do so. Language compilers that target the
.NET Framework make the features of the .NET Framework available to existing code written in
that language, greatly easing the migration process for existing applications.
While the runtime is designed for the software of the future, it also supports software of
today and yesterday. Interoperability between managed and unmanaged code enables developers
to continue to use necessary COM components and DLLs.
The runtime is designed to enhance performance. Although the common language
runtime provides many standard runtime services, managed code is never interpreted. A feature
called just-in-time (JIT) compiling enables all managed code to run in the native machine
language of the system on which it is executing. Meanwhile, the memory manager removes the
possibilities of fragmented memory and increases memory locality-of-reference to further
increase performance.
Finally, the runtime can be hosted by high-performance, server-side applications, such as
Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables
you to use managed code to write your business logic, while still enjoying the superior
performance of the industry's best enterprise servers that support runtime hosting.
.NET FRAMEWORK CLASS LIBRARY
The .NET Framework class library is a collection of reusable types that tightly integrate
with the common language runtime. The class library is object oriented, providing types from
which your own managed code can derive functionality. This not only makes the .NET
Framework types easy to use, but also reduces the time associated with learning new features of
the .NET Framework. In addition, third-party components can integrate seamlessly with classes
in the .NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces that
you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework.
As you would expect from an object-oriented class library, the .NET Framework types
enable you to accomplish a range of common programming tasks, including tasks such as string
management, data collection, database connectivity, and file access. In addition to these common
tasks, the class library includes types that support a variety of specialized development scenarios.
For example, you can use the .NET Framework to develop the following types of applications
and services:
Console applications.
Windows GUI applications (Windows Forms).
ASP.NET applications.
XML Web services.
Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that
vastly simplify Windows GUI development. If you write an ASP.NET Web Form application,
you can use the Web Forms classes.
CLIENT APPLICATION DEVELOPMENT
Client applications are the closest to a traditional style of application in Windows-based
programming. These are the types of applications that display windows or forms on the desktop,
enabling a user to perform a task. Client applications include applications such as word
processors and spreadsheets, as well as custom business applications such as data-entry tools,
reporting tools, and so on. Client applications usually employ windows, menus, buttons, and
other GUI elements, and they likely access local resources such as the file system and peripherals
such as printers.
Another kind of client application is the traditional ActiveX control (now replaced by the
managed Windows Forms control) deployed over the Internet as a Web page. This application is
much like other client applications: it is executed natively, has access to local resources, and
includes graphical elements.
In the past, developers created such applications using C/C++ in conjunction with the
Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of
these existing products into a single, consistent development environment that drastically
simplifies the development of client applications.
The Windows Forms classes contained in the .NET Framework are designed to be used
for GUI development. You can easily create command windows, buttons, menus, toolbars, and
other screen elements with the flexibility necessary to accommodate shifting business needs.
For example, the .NET Framework provides simple properties to adjust visual attributes
associated with forms. In some cases the underlying operating system does not support changing
these attributes directly, and in these cases the .NET Framework automatically recreates the
forms. This is one of many ways in which the .NET Framework integrates the developer
interface, making coding simpler and more consistent.
Visual Basic .NET (VB.NET)
VB.NET is an object-oriented computer language that can be viewed as an evolution of
Microsoft's Visual Basic (VB) implemented on the Microsoft .NET Framework. Its introduction
was controversial, as significant changes broke backward compatibility with older versions
and caused a rift within the developer community.
The great majority of VB.NET developers use Visual Studio .NET as their integrated
development environment (IDE); SharpDevelop provides an open-source alternative. Like
all .NET languages, programs written in VB.NET require the .NET Framework to execute.
Versions of Visual Basic .NET
As of November 2007, four versions of Visual Basic .NET had been released by the
Visual Basic team:
Visual Basic .NET (VB 7)
The original Visual Basic .NET was released alongside Visual C# and ASP.NET in 2002.
C# — widely touted as Microsoft's answer to Java — received the lion's share of media attention,
while VB.NET (sometimes known as VB7) was not widely covered. As a result, few outside the
Visual Basic community paid much attention to it.
Those who did try the first version found a powerful but very different language under the hood,
with disadvantages in some areas, including a runtime that was ten times as large to package as
the VB6 runtime and an increased memory footprint.
Visual Basic .NET 2003 (VB 7.1)
Visual Basic .NET 2003 was released with version 1.1 of the .NET Framework. New
features included support for the .NET Compact Framework and a better VB upgrade wizard.
Improvements were also made to the performance and reliability of the .NET IDE (particularly
the background compiler) and runtime.
In addition, Visual Basic .NET 2003 was also available in the Visual Studio .NET 2003
Academic Edition (VS03AE). VS03AE is distributed to a certain number of scholars from each
country for free.
Visual Basic 2005 (VB 8.0)
Visual Basic 2005 is the next iteration of Visual Basic .NET, Microsoft having decided to
drop the .NET portion of the title.
For this release, Microsoft added many features, including:
Edit and Continue - probably the biggest "missing feature" from Visual Basic .NET,
allowing the modification of code and immediate resumption of execution
Design-time expression evaluation
The My pseudo-namespace (overview, details), which provides:
o easy access to certain areas of the .NET Framework that otherwise require
significant code to access
o dynamically-generated classes
Improvements to the VB-to-VB.NET converter
The Using keyword, simplifying the use of objects that require the Dispose pattern to free
resources
Just My Code, which hides boilerplate code written by the Visual Studio .NET IDE
Data Source binding, easing database client/server development
The above functions are intended to reinforce Visual Basic .NET's focus as a rapid
application development platform and further differentiate it from C#.
Visual Basic 2005 introduced features meant to fill in the gaps between itself and other "more
powerful" .NET languages, adding:
.NET 2.0 languages features such as:
o generics
o Partial classes, a method of defining some parts of a class in one file and then
adding more definitions later; particularly useful for integrating user code with
auto-generated code
o Nullable Types
XML comments that can be processed by tools like NDoc to produce "automatic"
documentation
Operator overloading
Support for unsigned integer data types commonly used in other languages
Visual Basic 2008 (VB 9.0)
Visual Basic 9.0 was released together with the Microsoft .NET Framework 3.5 on
November 19, 2007.
For this release, Microsoft added many features, including:
A true ternary operator If(Boolean, value, value) to replace the IIF function.
Support for LINQ
Lambda expressions
XML Literals
Type Inference
ADO.NET (ActiveX Data Objects)
ActiveX Data Objects (ADO) enables you to write a client application that accesses and
manipulates data in a data source through a provider. ADO is ideally suited to consuming data
exposed by OLE DB providers, such as those written with the Microsoft OLE DB Simple Provider
Toolkit. ADO's primary benefits are ease of use, high speed, low memory overhead, and a small
disk footprint.
By using the toolkit with ADO, you build a foundation for implementing flexible data
access strategies at a higher level. For example, you can combine ADO's ease of application
programmability with the simple provider toolkit's ease of developing providers to quickly
build end-to-end, single-, or multi-tiered applications that address your corporate, intranet,
Internet, or enterprise-wide data access needs.
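To give a runnable taste of the ADO-style tabular object model without requiring an OLE DB provider, the sketch below uses the in-memory DataTable from System.Data; its Select method filters rows much like a provider-backed recordset. The table and column names are illustrative, not the project's actual schema.

```csharp
using System;
using System.Data;

static class AdoSketch
{
    // Builds a small in-memory "Customer" table, standing in for a
    // provider-backed data source.
    public static DataTable BuildCustomerTable()
    {
        var table = new DataTable("Customer");
        table.Columns.Add("Phone", typeof(string));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add("9000000001", "Anu");
        return table;
    }

    // Looks up a customer's name by phone number using a filter
    // expression, analogous to filtering an ADO recordset.
    public static string LookupName(DataTable table, string phone)
    {
        DataRow[] hits = table.Select("Phone = '" + phone + "'");
        return hits.Length > 0 ? (string)hits[0]["Name"] : null;
    }
}
```

Against a real database, the same pattern would go through a connection and command objects rather than an in-memory table.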
3.1 DESIGN NOTATION
3.1.1 DATAFLOW DESIGN
The data flow diagram is a way of expressing system requirements in a graphical
form; this leads to a modular design. A data flow diagram, also known as a "bubble chart,"
has the purpose of clarifying system requirements and identifying the major transformations
that will become programs in the system design.
A DATA FLOW DIAGRAM (DFD), or BUBBLE CHART, describes the flow of data and
the processes that change, or transform, data throughout the system. This network is
constructed using a set of symbols that do not imply a physical implementation. It is a
graphical tool for structured analysis of the system requirements. A DFD models a system
using external entities from which data flows to a process, which transforms the data and
creates output data flows that go to other processes, external entities, or files. Data in
files may also flow to processes as inputs.
DFDs can be hierarchically organized, which helps in partitioning and analyzing large
systems. As a first step, one data flow diagram can depict an entire system, giving the
system overview; this is called the context diagram, or level 0 DFD. The context diagram can
be further expanded, and the successive expansion of a DFD from the context diagram to
diagrams giving more detail is known as leveling the DFD. Thus a top-down approach is used,
starting with an overview and then working out the details.
The main merit of a DFD is that it provides an overview of the system requirements:
what data the system processes, what transformations of the data are done, what files are
used, and where the results flow.
3.3.1 BASIC DFD SYMBOLS
-----------► Data flow: a route that enables data to travel from one point
to another. Data may flow from a source to a data store or process. An arrow line
depicts the flow, with the arrowhead pointing in the direction of the flow.
A process represents a transformation where incoming data flows are changed into
outgoing data flows.
A data store is a repository of data stored for use by one or more processes; it may be as
simple as a buffer or queue or as sophisticated as a relational file, and it should have a
clear name. If a process merely uses the contents of a store and does not alter it, the
arrowhead goes only from the store to the process; if a process alters the details in the
store, a double-headed arrow is used.
A source or sink is a person or part of an organization that enters or receives information
from the system and is considered to be outside the context of the data flow model.
Customer Details:
[DFD figure: Main Window → Customer Details → Processing → Data Base]
Login:
[DFD figure: User Interface → Check → Data Base → Main Window]
Driver Details:
[DFD figure: Main Window → Driver Details → Processing → Data Base]
Account Details:
[DFD figure: Main Window → Account Details → Processing → Data Base]
Booking Details:
[DFD figure: Main Window → Booking Details → Processing → Data Base]
Over All DFD:
[DFD figure: Main Window → Customer / Driver / Account / Booking]
3.3 DESIGN PROCESS
3.3.1 Input Design
A source document differs from a turnaround document in that the former contains data
that change the status of a resource while the latter is a machine readable document.
Transaction throughput is the number of error-free transactions entered during a specified
time period.
A document should be concise because longer documents contain more data and so take
longer to enter and have a greater chance of data entry errors.
Numeric coding substitutes numbers for character data (e.g., 1=male, 2=female);
mnemonic coding represents data in a form that is easier for the user to understand and
remember. (e.g., M=male, F=female).
The more quickly an error is detected, the closer the error is to the person who generated
it and so the error is more easily corrected.
An example of an illogical combination in a payroll system would be an option to
eliminate federal tax withholding.
An error suspense record would include the following fields: data entry operator
identification, transaction entry date, transaction entry time, transaction type, transaction
image, fields in error, error codes, date transaction reentered successfully.
A data input specification is a detailed description of the individual fields (data elements)
on an input document together with their characteristics (i.e., type and length).
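The numeric versus mnemonic coding schemes described in the guidelines above can be shown side by side. The table contents repeat the 1/2 and M/F gender example from the text; the class name Codes is an illustrative assumption.

```csharp
using System;
using System.Collections.Generic;

static class Codes
{
    // Numeric coding: numbers substitute for character data.
    public static readonly Dictionary<int, string> Numeric =
        new Dictionary<int, string> { { 1, "male" }, { 2, "female" } };

    // Mnemonic coding: easier for the user to understand and remember.
    public static readonly Dictionary<string, string> Mnemonic =
        new Dictionary<string, string> { { "M", "male" }, { "F", "female" } };
}
```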
3.3.2 Output Design
These guidelines apply for the most part to both paper and screen outputs. Output design is often
discussed before other aspects of design because, from the client's point of view, the output is the
system. Output is what the client is buying when he or she pays for a development project.
Inputs, databases, and processes exist to provide output.
Problems often associated with business information output are information delay,
information (data) overload, paper domination, excessive distribution, and nontailoring.
Mainframe printers: high volume, high speed, located in the data center Remote site
printers: medium speed, close to end user.
COM is Computer Output Microfilm. It is more compact than traditional output and may
be produced as fast as non-impact printer output.
Turnaround documents reduce the cost of internal information processing by reducing
both data entry and associated errors.
Periodic reports have set frequencies such as daily or weekly; ad hoc reports are produced
at irregular intervals.
Detail and summary reports differ in that the former supports the day-to-day operation of
the business while the latter includes statistics and ratios used by managers to assess the
health of operations.
Page breaks and control breaks allow for summary totals on key fields.
Report requirements documents contain general report information and field
specifications; print layout sheets present a picture of what the report will actually look
like.
Page decoupling is the separation of pages into cohesive groups.
Two ways to design output for strategic purposes are (1) make it compatible with
processes outside the immediate scope of the system, and (2) turn action documents into
turnaround documents.
People often receive reports they do not need because the number of reports received is
perceived as a measure of power.
Fields on a report should be selected carefully to provide uncluttered reports, facilitate
80-column remote printing, and reduce information (data) overload.
The types of fields which should be considered for business output are: key fields for
access to information, fields for control breaks, fields that change, and exception fields.
Output may be designed to aid future change by stressing unstructured reports, defining
field size for future growth, making field constants into variables, and leaving room on
summary reports for added ratios and statistics.
Output can now be more easily tailored to the needs of individual users because inquiry-
based systems allow users themselves to create ad hoc reports.
An output intermediary can restrict access to key information and prevent unauthorized
access.
An information clearinghouse (or information center) is a service center that provides
consultation, assistance, and documentation to encourage end-user development and use
of applications.
The specifications needed to describe the output of a system are: data flow diagrams, data
flow specifications, data structure specifications, and data element specifications.
3.3.3 Database design
Database design is the process of producing a detailed data model of a database. This
logical data model contains all the needed logical and physical design choices and physical
storage parameters needed to generate a design in a Data Definition Language, which can then be
used to create a database. A fully attributed data model contains detailed attributes for each
entity.
The term database design can be used to describe many different parts of the design of an overall
database system. Principally, and most correctly, it can be thought of as the logical design of the
base data structures used to store the data. In the relational model these are the tables and views.
In an object database the entities and relationships map directly to object classes and named
relationships. However, the term database design could also be used to apply to the overall
process of designing, not just the base data structures, but also the forms and queries used as part
of the overall database application within the database management system (DBMS).
The process of doing database design generally consists of a number of steps which will be
carried out by the database designer. Usually, the designer must:
Determine the relationships between the different data elements.
Superimpose a logical structure upon the data on the basis of these relationships.
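Applied to CTAS, the two steps above might yield entities such as drivers, cars, and bookings, with relationships expressed as keys. A sketch of that logical structure as classes follows; the names and fields are hypothetical, not the project's actual schema.

```csharp
using System;

// Each entity becomes a type; each relationship becomes a key reference.
class Driver
{
    public int DriverId;
    public string Name = "";
}

class Car
{
    public string CarNo = "";
    public int DriverId;            // relationship: car -> driver
}

class Booking
{
    public int BookingId;
    public string CustomerPhone = ""; // relationship: booking -> customer
    public string CarNo = "";         // relationship: booking -> car
}
```

In the relational model these classes would correspond to tables, with the key fields becoming primary and foreign keys.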
Process Design
"Process design" (in contrast to the "design process" mentioned above) refers to the planning
of the routine steps of a process aside from the expected result. Processes are treated as a
product of design, not the method of design. The term originated with the industrial design
of chemical processes.
With the increasing complexities of the information age, consultants and executives have found
the term useful to describe the design of business processes as well as manufacturing processes.
MICROSOFT SQL SERVER
Microsoft SQL Server is a relational database management system developed by
Microsoft. As a database, it is a software product whose primary function is to store and retrieve
data as requested by other software applications, be it those on the same computer or those
running on another computer across a network (including the Internet). There are at least a dozen
different editions of Microsoft SQL Server aimed at different audiences and for different
workloads (ranging from small applications that store and retrieve data on the same computer, to
millions of users and computers that access huge amounts of data from the Internet at the same
time). Its primary query languages are T-SQL and ANSI SQL.
Prior to version 7.0 the code base for MS SQL Server was sold by Sybase to
Microsoft, and was Microsoft's entry into the enterprise-level database market, competing against
Oracle, IBM, and, later, Sybase. Microsoft, Sybase and Ashton-Tate originally teamed up to
create and market the first version named SQL Server 1.0 for OS/2 (about 1989) which was
essentially the same as Sybase SQL Server 3.0 on Unix, VMS, etc. Microsoft SQL Server 4.2
was shipped around 1992 (available bundled with IBM OS/2 version 1.3). Later Microsoft SQL
Server 4.21 for Windows NT was released at the same time as Windows NT 3.1. Microsoft SQL
Server v6.0 was the first version designed for NT, and did not include any direction from Sybase.
About the time Windows NT was released, Sybase and Microsoft parted ways and each pursued
its own design and marketing schemes. Microsoft negotiated exclusive rights to all versions of
SQL Server written for Microsoft operating systems. Later, Sybase changed the name of its
product to Adaptive Server Enterprise to avoid confusion with Microsoft SQL Server. Until
1994, Microsoft's SQL Server carried three Sybase copyright notices as an indication of its
origin.
SQL Server 7.0 and SQL Server 2000 included modifications and extensions to the Sybase code
base, adding support for the IA-64 architecture. By SQL Server 2005 the legacy Sybase code had
been completely rewritten.
In the five years since the release of Microsoft's previous SQL Server product (SQL Server 2000),
advancements have been made in performance, the client IDE tools, and several complementary
systems that are packaged with SQL Server 2005. These include: an ETL tool (SQL Server
Integration Services or SSIS), a Reporting Server, an OLAP and data mining server (Analysis
Services), and several messaging technologies, specifically Service Broker and Notification
Services.
SQL Server 2005
SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It
included native support for managing XML data, in addition to relational data. For this purpose,
it defined an xml data type that could be used either as a data type in database columns or as
literals in queries. XML columns can be associated with XSD schemas; XML data being stored
is verified against the schema. XML is converted to an internal binary data type before being
stored in the database. Specialized indexing methods were made available for XML data. XML
data is queried using XQuery; SQL Server 2005 added some extensions to the T-SQL language
to allow embedding XQuery queries in T-SQL. In addition, it also defines a new extension to
XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server
2005 also allows a database server to be exposed over web services using Tabular Data Stream
(TDS) packets encapsulated within SOAP requests. When the data is accessed over
web services, results are returned as XML.
Common Language Runtime (CLR) integration was introduced with this version,
enabling one to write SQL code as Managed Code by the CLR. For relational data, T-SQL has
been augmented with error handling features (try/catch) and support for recursive queries with
CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new
indexing algorithms, syntax and better error recovery systems. Data pages are checksummed for
better error resiliency, and optimistic concurrency support has been added for better
performance. Permissions and access control have been made more granular and the query
processor handles concurrent execution of queries in a more efficient way. Partitions on tables
and indexes are supported natively, so scaling out a database onto a cluster is easier. SQL CLR
was introduced with SQL Server 2005 to let it integrate with the .NET Framework.
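The recursive queries with CTEs mentioned above can be illustrated with a short example. The sketch below uses Python's built-in sqlite3 module only so that it runs standalone; T-SQL spells the same query without the RECURSIVE keyword, and the Area hierarchy shown is a hypothetical example in the spirit of this project's area records.

```python
import sqlite3

# Sketch of a recursive Common Table Expression (CTE) walking a hierarchy.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Area (AreaId INTEGER PRIMARY KEY, Name TEXT, ParentId INTEGER);
    INSERT INTO Area VALUES (1, 'City',  NULL),
                            (2, 'North', 1),
                            (3, 'Docks', 2);
""")

# The anchor member selects the root; the recursive member joins each
# area to its parent's row, adding one level per step.
rows = conn.execute("""
    WITH RECURSIVE AreaTree(AreaId, Name, Depth) AS (
        SELECT AreaId, Name, 0 FROM Area WHERE ParentId IS NULL
        UNION ALL
        SELECT a.AreaId, a.Name, t.Depth + 1
        FROM Area a JOIN AreaTree t ON a.ParentId = t.AreaId
    )
    SELECT Name, Depth FROM AreaTree ORDER BY Depth
""").fetchall()
print(rows)  # [('City', 0), ('North', 1), ('Docks', 2)]
```

Before CTEs, a traversal like this required a loop or a temporary table; the CTE expresses it as a single declarative query.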
SQL Server 2005 introduced "MARS" (Multiple Active Result Sets), a method of
allowing a single database connection to be used for multiple active commands.
SQL Server 2005 introduced DMVs (Dynamic Management Views), which are
specialized views and functions that return server state information that can be used to monitor
the health of a server instance, diagnose problems, and tune performance.
Service Pack 1 (SP1) of SQL Server 2005 introduced Database Mirroring, a
high availability option that provides redundancy and failover capabilities at the database level.
Failover can be performed manually or can be configured for automatic failover. Automatic
failover requires a witness partner and an operating mode of synchronous (also known as high-
safety or full safety).
SQL Server 2008
SQL Server 2008 (formerly codenamed "Katmai") was released on August 6, 2008 and
aims to make data management self-tuning, self-organizing, and self-maintaining with the
development of SQL Server Always On technologies, to provide near-zero downtime. SQL
Server 2008 also includes support for structured and semi-structured data, including digital
media formats for pictures, audio, video and other multimedia data. In current versions, such
multimedia data can be stored as BLOBs (binary large objects), but they are generic bitstreams.
Intrinsic awareness of multimedia data will allow specialized functions to be performed on them.
According to Paul Flessner, senior Vice President, Server Applications, Microsoft Corp., SQL
Server 2008 can be a data storage backend for different varieties of data: XML, email,
time/calendar, file, document, spatial, etc., as well as perform search, query, analysis, sharing, and
synchronization across all data types.
Other new data types include specialized date and time types and a Spatial data type for
location-dependent data. Better support for unstructured and semi-structured data is provided
using the new FILESTREAM data type, which can be used to reference any file stored on the file
system. Structured data and metadata about the file is stored in SQL Server database, whereas
the unstructured component is stored in the file system. Such files can be accessed both via
Win32 file handling APIs as well as via SQL Server using T-SQL; doing the latter accesses the
file data as a BLOB. Backing up and restoring the database backs up or restores the referenced
files as well. SQL Server 2008 also natively supports hierarchical data, and includes T-SQL
constructs to deal with it directly, without using recursive queries.
The Full-text search functionality has been integrated with the database engine.
According to a Microsoft technical article, this simplifies management and improves
performance.
Spatial data will be stored in two types. A "Flat Earth" (GEOMETRY or planar) data type
represents geospatial data which has been projected from its native, spherical, coordinate system
into a plane. A "Round Earth" data type (GEOGRAPHY) uses an ellipsoidal model in which the
Earth is defined as a single continuous entity which does not suffer from singularities such as
the international dateline, poles, or map projection zone "edges". Approximately 70 methods are
available to represent spatial operations for the Open Geospatial Consortium Simple Features for
SQL, Version 1.1.
SQL Server includes better compression features, which also helps in improving
scalability. It enhanced the indexing algorithms and introduced the notion of filtered indexes. It
also includes Resource Governor that allows reserving resources for certain users or workflows.
It also includes capabilities for transparent encryption of data (TDE) as well as compression of
backups. SQL Server 2008 supports the ADO.NET Entity Framework and the reporting tools,
replication, and data definition will be built around the Entity Data Model. SQL Server
Reporting Services will gain charting capabilities from the integration of the data visualization
products from Dundas Data Visualization, Inc., which was acquired by Microsoft. On the
management side, SQL Server 2008 includes the Declarative Management Framework which
allows configuring policies and constraints, on the entire database or certain tables, declaratively.
The version of SQL Server Management Studio included with SQL Server 2008 supports
IntelliSense for SQL queries against a SQL Server 2008 Database Engine. SQL Server 2008 also
makes the databases available via Windows PowerShell providers and management functionality
available as Cmdlets, so that the server and all the running instances can be managed from
Windows PowerShell.
SQL Server 2008 R2
SQL Server 2008 R2 (10.50.1600.1, formerly codenamed "Kilimanjaro") was announced
at TechEd 2009, and was released to manufacturing on April 21, 2010. SQL Server 2008 R2
adds certain features to SQL Server 2008 including a master data management system branded as
Master Data Services, for central management of master data entities and hierarchies. It also
adds Multi-Server Management, a centralized console to manage multiple SQL Server 2008
instances and services, including relational databases, Reporting Services, Analysis Services and
Integration Services.
SQL Server 2008 R2 includes a number of new services, including PowerPivot for Excel
and SharePoint, Master Data Services, StreamInsight, Report Builder 3.0, Reporting Services
Add-in for SharePoint, a Data-tier function in Visual Studio that enables packaging of tiered
databases as part of an application, and a SQL Server Utility named UC (Utility Control Point),
part of AMSM (Application and Multi-Server Management) that is used to manage multiple
SQL Servers.
4.1 DATABASE CREATION & USER CREATION
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then
expand that instance.
2. Right-click Databases and then click New Database.
3. In New Database, enter a database name.
4. To create the database by accepting all default values, click OK; otherwise, continue with
the following optional steps.
5. To change the owner name, click (…) to select another owner.
6. To change the default values of the primary data and transaction log files, in the Database
files grid, click the appropriate cell and enter the new value. For more information, see
Add Data or Log Files to a Database.
7. To change the collation of the database, select the Options page, and then select a
collation from the list.
8. To change the recovery model, select the Options page and select a recovery model from
the list.
9. To change database options, select the Options page, and then modify the database
options. For a description of each option, see ALTER DATABASE SET Options
(Transact-SQL).
10. To add a new file group, click the File groups page. Click Add and then enter the values
for the file group.
11. To add an extended property to the database, select the Extended Properties page.
12. In the Name column, enter a name for the extended property.
13. In the Value column, enter the extended property text. For example, enter one or more
statements that describe the database.
14. To create the database, click OK.
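The dialog steps above also have a scripted equivalent: in T-SQL the whole New Database dialog reduces to roughly a CREATE DATABASE statement plus ALTER DATABASE for the options. Since SQL Server itself cannot be assumed available here, the sketch below uses Python's built-in sqlite3 module, where creating a database is simply opening a new file; the database name, path, and T-SQL shown in the comments are hypothetical examples.

```python
import os
import sqlite3
import tempfile

# In T-SQL, the dialog above amounts to roughly (names are hypothetical):
#   CREATE DATABASE CallTaxi;
#   ALTER DATABASE CallTaxi SET RECOVERY SIMPLE;  -- step 8, recovery model
#
# As a self-contained stand-in, sqlite3 creates the database the moment a
# connection to a new file path is opened.
path = os.path.join(tempfile.mkdtemp(), "CallTaxi.db")
conn = sqlite3.connect(path)  # creates the database file
conn.execute("CREATE TABLE Booking (BookingId INTEGER PRIMARY KEY)")
conn.commit()
print(os.path.exists(path))  # True
```

Scripting the creation this way makes the step repeatable across development and production machines, instead of clicking through Object Explorer each time.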
4.2 IMPLEMENTATION OF SQL VIEW
Before you create a view, consider the following guidelines:
You can create views only in the current database. However, the tables and views
referenced by the new view can exist in other databases or even other servers if the view
is defined using distributed queries.
View names must follow the rules for identifiers and must be unique for each schema.
Additionally, the name must not be the same as any tables contained by that schema.
You can build views on other views. Microsoft SQL Server allows views to be nested.
Nesting may not exceed 32 levels. The actual limit on nesting of views may be less
depending on the complexity of the view and the available memory.
You cannot associate rules or DEFAULT definitions with views.
You cannot associate AFTER triggers with views, only INSTEAD OF triggers.
The query defining the view cannot include the COMPUTE or COMPUTE BY clauses,
or the INTO keyword.
The query defining the view cannot include the ORDER BY clause, unless there is also a
TOP clause in the select list of the SELECT statement.
The query defining the view cannot contain the OPTION clause specifying a query hint.
The query defining the view cannot contain the TABLESAMPLE clause.
You cannot define full-text index definitions on views.
You cannot create temporary views, and you cannot create views on temporary tables.
Views, tables, or functions participating in a view created with the SCHEMABINDING
clause cannot be dropped, unless the view is dropped or changed so that it no longer has
schema binding. In addition, ALTER TABLE statements on tables that participate in
views having schema binding will fail if these statements affect the view definition.
If a view is not created with the SCHEMABINDING clause, sp_refreshview should be
run when changes are made to the objects underlying the view that affect the definition of
the view. Otherwise, the view might produce unexpected results when it is queried.
You cannot issue full-text queries against a view, although a view definition can include
a full-text query if the query references a table that has been configured for full-text
indexing.
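A minimal sketch of creating and querying a view, following the guidelines above, is shown below. It uses Python's built-in sqlite3 module so that it runs standalone; the CREATE VIEW statement itself is the same ANSI SQL form SQL Server accepts, and the table, view, and column names are hypothetical examples in this project's domain.

```python
import sqlite3

# Sketch of a view definition, with sqlite3 standing in for SQL Server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Booking (BookingId INTEGER PRIMARY KEY,
                          Status    TEXT,
                          Fare      REAL);
    INSERT INTO Booking VALUES (1, 'Completed', 12.5),
                               (2, 'Cancelled',  0.0),
                               (3, 'Completed',  8.0);

    -- The view name is unique within the schema and differs from every
    -- table name, and the defining query has no ORDER BY clause,
    -- per the guidelines above.
    CREATE VIEW CompletedBookings AS
        SELECT BookingId, Fare FROM Booking WHERE Status = 'Completed';
""")

# The view is queried exactly like a table.
rows = conn.execute(
    "SELECT COUNT(*), SUM(Fare) FROM CompletedBookings").fetchone()
print(rows)  # (2, 20.5)
```

Because the view stores only the query, not the data, it always reflects the current rows of the underlying Booking table.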
4.3 IMPLEMENTATION OF SQL TABLE & RELATIONSHIP
The following list describes rules for which you should consider SQL implementation.
Your rule uses a SCOPE expression that generates a very large Member Set.
Your rule causes many cell values to be written to the fact table.
For example, a typical use of a rule with SQL implementation is a forecast. In this
scenario, the rule might copy all existing fact data, increase all values by 10 percent, and
then replace the original facts with the increased values.
Your rule uses only the supported subset of PerformancePoint Expression Language
(PEL) functions.
Your rule does not use aggregated values, or it performs simple or no aggregation in its
calculations.
For example, if your cube contains sales fact data, the SQL Server Analysis Services
server preprocesses aggregated data and caches the data in the cube. However, Planning
Business Modeler translates a rule with SQL implementation into an SQL stored
procedure. The stored procedure cannot retrieve values from the cube, but instead must
recompute all the aggregated values.
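The forecast scenario described above (copy the existing fact data, increase all values by 10 percent, and replace the originals) is, at its core, a single set-based SQL statement. The sketch below illustrates this with Python's built-in sqlite3 module as a stand-in for the generated SQL stored procedure; the Fact table and its columns are hypothetical.

```python
import sqlite3

# Sketch of the forecast-style rule: replace every fact value with the
# value increased by 10 percent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Fact (Period TEXT, Value REAL);
    INSERT INTO Fact VALUES ('2010-Q1', 100.0), ('2010-Q2', 200.0);
""")

# The whole rule is one set-based UPDATE, which is why a SQL implementation
# suits it: no per-cell round trips to the cube are needed.
conn.execute("UPDATE Fact SET Value = ROUND(Value * 1.10, 2)")
conn.commit()

result = conn.execute(
    "SELECT Period, Value FROM Fact ORDER BY Period").fetchall()
print(result)
```

This is the shape of rule the section recommends for SQL implementation: many cells written, no aggregated values read back.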
Conclusion
This Project provides support for call taxi office’s management and operators to maintain
and manage telephone booking orders. The system provides a number of value added
functionalities from empowering the booking handling process and driver management to
managing the drivers' records.
This project has been integrated with the latest route mapping and vehicle tracking
functionality, along with our own easy paperless booking, easy-to-use dispatching system,
finding/editing bookings, regular bookings, accurate pricing based on route and vehicle type,
adding/editing plots, calculating driver wages, reports, and many more tools to help.
BIBLIOGRAPHY
Good Teachers are worth more than a thousand books; we have them in our
Department
References Made From:
1. Application Development Using C# and .Net
By Michael Stiefel, Robert J. Oberg
2. Professional C# 2005 with .Net 3.0
By Christian Nagel, Bill Evjen, Jay Glynn, Karli Watson, Morgan
Skinner.
3. Windows Forms Programming in C#
By Chris Sells
4. Beginning Visual C# 2005
By Karli Watson, Christian Nagel, Jacob Hammer Pedersen, Jon D.
Reid, Morgan Skinner, Eric White
Sites Referred:
http://www.dotnet-tutorial.com
http://www.networkcomputing.com/
http://www.xml.com/pub/r/838
http://www.c-sharpcorner.com/
Table Design
Area Details
Billing Details
Booking
CarModel
Customer Details
Driver Details
Driver Attendance
Driver Location
Tariff
Extra Tariff
Screens