I –Brook (Volume 1, Issue 3) July – December 2017 |1
Vision of the CSE Department:
The vision of the CSE department is to develop a world-class department of
Technical Education in Computer Science & Engineering that caters to the national and
international demand for quality computer engineers for a better world.
The department will also be a centre of excellence in research and education,
generating innovative knowledge and technology in the field of Computer Science and
Engineering.
Mission of the CSE Department:
M1: To provide quality technical education in the area of Computer Science and
Engineering with strong fundamentals, through a periodically updated curriculum, modern
laboratory facilities, collaborative ventures with industry, and an effective teaching-learning
process.
M2: To impart technical education that builds innovative skills in each area of
specialization, for industry, academia and society at large.
M3: To generate and disseminate innovative knowledge and emerging technologies in
the field of Computer Science and Engineering, essential to local and global needs.
M4: To develop competent manpower with a deep awareness of human values and corporate
ethics, creating globally acceptable, highly skilled professional computer
engineers for industry, academia and society at large.
M5: To develop communication skills, teamwork and leadership qualities among students
through continuous, rigorous grooming by industry professionals.
Program Educational Objectives of the CSE Department:
PEO 1. To educate and train students in the fundamentals of Computer Science & Engineering,
Basic Science and Engineering so that they can analyze and solve computing problems, as
demonstrated by their professional accomplishments in industry, academia and government sectors.
PEO 2. To educate and train students with an understanding of real-world computing needs, as
demonstrated by their ability to address current technical issues involving computing problems
encountered in industry, academia and government sectors.
PEO 3. To train students to work effectively, professionally and ethically in computing-related
professions, as demonstrated by their communication, teamwork and leadership skills in industry,
academia, government sectors and society at large.
The Department of Computer Science and Engineering, Supreme Knowledge
Foundation Group of Institutions, is delighted to announce the launch of I-Brook for the
students of the department.
This magazine aims to inform readers about the latest technological progress in
the field of Computer Science & Engineering. Students can then incorporate these
ideas not only into their curriculum but also absorb the essence of modern trends,
inspiring greater knowledge and a better understanding of the subject. We hope
I-Brook succeeds in its endeavor to bring all of its readers the pleasure of
exploring new technologies in Computer Science & Engineering from around the world.
As this is the third issue of the first volume, we gratefully acknowledge all the
contributors and readers who have helped it flourish by spreading knowledge to one and all.
Happy Reading!!
Editor — Mr. Amitava Halder
Co-editors — Bidisha Bhabani, Koyel Chakraborty
Magazine Title — Trisha Dey, Student, 4th year CSE, SKFGI
Logo and Cover Page Design — Mr. Joy Chatterjee
Members — Dr. Rajib Bag, Manab Kumar Das, Sayon Ghosh, Sonali Banerjee, Aritra
Bandopadhyay, Kaustuv Deb, Sathi Roy, Rudra Prasad Chatterjee, Imraj Malik, Soumen
Moulik, Sourabh Koley, Avijit Batabyal, Sanjit Mazi, Sudeshna Sanpui, Jaya Das, Sumita Dutta.
I-Brook’s Purpose!!
I am pleased to learn that the Department of Computer Science and Engineering is
going to publish a technical magazine entitled 'iBROOK'. This is a good initiative
by the department. Various technological developments may be highlighted in this
magazine. Through it, students and faculty members will get an opportunity
to showcase their diverse talents, which need not be limited to publishing technical
papers. To stay competitive in the market, it is necessary to remain well
informed and to participate in all activities for one's all-round development. I am sure
the magazine will provide the right platform for all to display their artistic, literary
and photographic talents.
I wish the editorial board all the success.
Dr. Amit Kumar Aditya, Director & Chief Academic Advisor, SKFGI
I am really very proud and delighted to bring out this issue of the departmental
technical magazine 'iBROOK'. It is the perfect vehicle for periodic
communication and a great way for educators to share new ideas with each other.
It strives to be an intellectual endeavor that focuses on critical
and creative thinking with the aim of social transformation.
The department has a unified team of qualified and experienced teachers, and we are
continuously striving to improve the quality of education and to fulfil
the mission and vision of the department. The department helps
students develop their overall personality and become worthy technocrats able to
compete and work at a global level, for which professional student chapters
such as IEEE, CSI and IET have been formed at the institutional level.
I admire the efforts of the editorial board in presenting the thoughts of young
engineers, and I offer my best wishes.
Dr. Rajib Bag, HOD, CSE, SKFGI
It gives me great pleasure to see that the Department of Computer Science and
Engineering has come up with its technical magazine 'iBROOK'. It is very
important to start such activities, which are very much needed at a technical
institute.
I am happy to see that the department is contributing to the country's technological
needs. The well-thought-out contents of the magazine brilliantly throw
light on the latest trends and important topics in the field of computer science
and information technology. Not only the faculty, but also the students and even
the alumni have put a lot of effort into this magazine. It
represents the spirit and hidden technical talents of the students, and it will
surely encourage other departments to bring out their own technical
magazines so that their students get a chance to showcase their young, intelligent
minds.
Congratulations to the team for providing such a good platform of knowledge
sharing. Best Wishes,
Bijoy Guha Mallick, Chairman, Trust, SKFGI
From the Editor:
Etymologically, the word curriculum is derived from the Latin word "Currere", which
means a racecourse or a runway on which one runs to reach a goal. Education, as a
learning tool, should help students pursue their goals, ideals and
aspirations in life. Keeping all this in mind, we have added another feather to our cap
by releasing our technical magazine 'iBROOK'. This is a time of great change, and in
education too we see rapid changes. The publication of this magazine is a major
milestone in the progress and development of the department. It will open a window
of opportunity for many, showing that as an institution we are destined for a bright future.
The student of today is an individual, a real person with self-respect,
sensitivity, responsibility and compassion. I wish everyone great success in the near future.
Finally, I would like to express my deep gratitude to the Editorial Board for their
support, and to thank the students and all stakeholders for their wholehearted
contribution to the success of this third issue of the 1st volume of 'iBROOK'.
Thank you and God bless.
Amitava Halder, Asst. Prof., CSE Dept., SKFGI
Dedicating the 3rd Issue of the 1st Volume to Veteran Actor Om Puri
Part A
• Student's Corner
• Faculty Corner
Part B
• Departmental Achievements
Contents
Students' Corner ~

Serial No. | Article Name | Written By | Page Number
1 | Android | Hani Singh, CSE 1st Year | 7
2 | 5 Pen PC Technology | Naman Agarwal, CSE 2nd Year | 10
3 | 5G Technology | Naman Agarwal, CSE 2nd Year | 12
4 | CPUs | Arpan Das, CSE 2nd Year | 14
5 | A mysterious new operating system - Fuchsia | Soumya Ghosh, CSE 2nd Year | 15
6 | IoT to protect the environment | Sabina Parveen, CSE 2nd Year | 16
7 | Snapdragon 835 Mobile Platform | Navin Gupta, CSE 2nd Year | 18
8 | Python: The Master of Language | Soumodeep Mukherjee, CSE 2nd Year | 21
9 | A point-wise glance at Sixth Sense Technology | Debjit Das, CSE 2nd Year | 24
10 | Video Game Development | Darshan Bhattacharya, CSE 2nd Year | 27
11 | WANNACRY says wanna cry? | Shivam Manna, CSE 2nd Year | 30
12 | Blue Eyes Technology | Debolina Ghosh, CSE 3rd Year | 32
13 | Basic concepts of Network security and attacks | Chiranjit Das, CSE 3rd Year | 34
14 | Digital Jewelry | Sumana Paul, CSE 3rd Year | 38
15 | DNA Storage | Anish Majumdar, CSE 3rd Year | 42
16 | Fortran | Govind Kumar Prajapati, CSE 3rd Year | 45
17 | Internet of Things (IoT) | Sudipta Das, CSE 3rd Year | 46
18 | Paper Battery | Megha Biswas, CSE 3rd Year | 48
19 | Pill Camera | Disha Mukherjee, CSE 3rd Year | 51
20 | SpaceX BFR - Anywhere on Earth in under an hour | Sagar Prasad, CSE 3rd Year | 54
21 | The apps you need to survive a natural disaster | Alisha Neogi, CSE 3rd Year | 58
22 | Augmented Reality | Biswadeepam Pal, CSE 4th Year | 61
23 | Wireless ad hoc networks | Deep Narayan Biswas, CSE 4th Year | 64
24 | Digital Cash | Ripa Ghosh, CSE 4th Year | 68
25 | DNA chip or Microarray | Gobinda Santra, CSE 4th Year | 71
26 | Cuckoo Search | Gouranga Mondal, CSE 4th Year | 77
27 | Thermography | Ishani Dey, CSE 4th Year | 79
28 | Touchscreen | Kaustav Nandy, CSE 4th Year | 82
29 | Virtual LAN Technology | Nayanika Saha, CSE 4th Year | 90
30 | GSM Security & Encryption | Neha Chowdhury, CSE 4th Year | 93
31 | Digital Watermarking Applications | Parasmita Gupta, CSE 4th Year | 101
32 | iOS - Mobile operating system by Apple | Puja Mishra, CSE 4th Year | 108
33 | Digital Signature | Rohit Shaw, CSE 4th Year | 110
34 | Fingerprint Recognition Technology | Sagnik Sen, CSE 4th Year | 112
35 | Gait Recognition | Saket Kumar, CSE 4th Year | 113

Faculty Corner ~

Serial No. | Article Name | Written By | Page Number
1 | Hyper-threading: A New Era for Processor Speed-up | Mr. Amitava Halder, CSE | 119
"ANDROID"
Hani Singh
CSE, 1st year
INTRODUCTION: Android is a Linux-based operating system designed primarily for touch-screen
mobile devices such as smartphones and tablet computers. Mobile operating systems have
developed over the last 15 years, from black & white phones to today's smartphones
and mini computers, and Android is now one of the most widely used mobile operating
systems. Android Inc. was founded in Palo Alto, California, in 2003. Android
applications are comfortable and advanced for users.
The hardware that supports Android software is based on the ARM architecture
platform. Android is an open-source operating system, meaning that it is free
and anyone can use it. Android has millions of apps available that can
help you manage your life in one way or another, and it is available at low cost in
the market; for these reasons Android is very popular.
Android development fully supports the Java programming language, although some standard
Java packages (parts of the J2SE APIs) are not supported. Version 1.0 of the Android
development kit was released in 2008; the latest version at the time this article was
written was Jelly Bean.
ANDROID ARCHITECTURE:
Android is a stack of software components, divided broadly into five sections and
three main layers:
1. Linux Kernel
2. Libraries
3. Android Runtime
APPLICATIONS: Android came into existence so that developers are given the power and
freedom to create compelling mobile applications while taking advantage of everything that the
mobile handset has to offer.
Android is built on the open Linux Kernel. This software for mobile applications is
open source, giving developers the opportunity to introduce and incorporate any technological
advancement. Built on a custom virtual machine, Android gives applications additional usage
and application power, enabling interactive and efficient applications and operational
software for your phone.
PART – A
1. Android applications are composed of one or more application components (activities,
services, content providers and broadcast receivers).
2. Each component performs a different role in the overall application behavior, and each one
can be activated individually (even by other applications).
3. The manifest file must declare all components in the application, and should also declare
all application requirements, such as the minimum version of Android required and any
required hardware configurations.
4. Non-code application resources should include alternatives for different device
configurations.
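The manifest rules above can be sketched as a minimal AndroidManifest.xml; the package name, version number and component name here are hypothetical placeholders, not taken from any real app:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.demo">

    <!-- Minimum Android platform version the app requires -->
    <uses-sdk android:minSdkVersion="16" />

    <!-- A hardware configuration the app depends on -->
    <uses-feature android:name="android.hardware.camera" />

    <application android:label="Demo">
        <!-- Every component (activity, service, receiver, provider)
             must be declared inside <application> -->
        <activity android:name=".MainActivity" />
    </application>
</manifest>
```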
FEATURES: The features of Android are as follows:
1. Handset layouts
2. Optimized graphics
3. Connectivity: GSM/EDGE, iDEN, CDMA, Wi-Fi, 3G, NFC, LTE, GPS
4. Messaging: SMS, MMS, C2DM (Cloud to Device Messaging), GCM (Google Cloud Messaging)
5. Multi-language support
6. Multi-touch
7. Video calling
8. Screen capture
9. Streaming media support, external storage
ADVANTAGES:
Android is a Linux-based open-source operating system, so it can be developed by anyone.
Easy access to Android apps.
You can replace the battery and mass storage, and use disk-drive and USB options.
It supports all Google services.
The operating system can inform you of new SMS messages, emails and the latest updates.
It supports multitasking.
It supports 2D and 3D graphics.
You can install a modified ROM.
There is always a reminder of notifications on the home screen of Android phones.
Android phones provide a bigger screen at a lower price compared to iPhones.
DISADVANTAGES:
Android is a very heavy operating system, and most apps tend to keep running in the
background even after the user closes them. This drains the battery, so phones
invariably fall short of the battery-life estimates given by manufacturers.
Some phones lose efficiency drastically if dozens of apps are installed.
Data safety is another problem, and the fear of losing data forever always hovers over
users. While several apps help back up data, none is tightly integrated into the OS.
The Android app store is open to every publisher. It is easier to get apps published in the
Play Store because the space is not continuously monitored. As a result, many Android apps
are half-baked and not malware-proof, which nullifies whatever innovation they have to offer.
When running large apps or games, Android often shows a "force close" error, which is
definitely annoying.
Not all apps available in the store are compatible with every range of Android phones.
App crashes and forced closures are the norm on Android devices, and staunch Android
users have grown used to this flaw.
CONCLUSION: Android has grown rapidly over the past four years, becoming
the most used smartphone operating system in the world. This is because
Android is not one phone from one company with one new OS every
year, but countless phones from numerous companies, each adding its own twist
throughout the year and developing gradually day by day. Android's ability to be
customized is unparalleled compared with Apple's and Microsoft's software,
allowing the user to change nearly every aspect of the system in ways
most iPhone and Windows Phone users would not dream possible. I am
not one to say that Android is better or worse than the others, but Android is unique
and incomparable to other mobile operating systems.
The next version of Android to be released is Android Oreo, expected to
reach many devices by early 2018. In terms of feature
highlights, Oreo focuses on speed and efficiency. For many phones
updated to Android 8.0 (another name for Oreo), boot speeds may be as much as
two times faster. While it is light on visual changes, Oreo packs in some useful design
tweaks, like picture-in-picture (PiP) mode for the likes of YouTube and Hangouts,
as well as notification dots that give you a colorful nudge to check your notifications.
"5 PEN PC TECHNOLOGY"
Naman Agarwal
CSE, 2nd year
5 Pen PC Technology is a gadget package including five functions: a pen-style cellular phone with a
handwriting data input function, virtual keyboard, a very small projector, camera scanner, and
personal ID key with cashless pass function.
When writing a quick note, pen and paper are still the most natural to use. The 5 pen pc
technology with digital pen and paper makes it possible to get a digital copy of handwritten
information, and have it sent to digital devices via Bluetooth.
P-ISM (Pen-style Personal Networking Gadget Package) is a new concept currently at the
development stage at NEC Corporation. It is a new invention in computing, closely
associated with the communication field, and it will surely have a great impact on the
computer field. In this device, Bluetooth is the main interconnecting technology
between the different peripherals.
P-ISM is a gadget package including five functions: a pen-style cellular phone with a handwriting
data input function, virtual keyboard, a very small projector, camera scanner, and personal ID key
with cashless pass function. P-ISMs are connected with one another through short-range wireless
technology. The whole set is also connected to the Internet through the cellular phone function.
This personal gadget in a minimalist pen style enables the ultimate ubiquitous computing.
Working Principle
A computer that utilizes an electronic pen (called a stylus) rather than a keyboard for input. Pen
computers generally require special operating systems that support handwriting recognition so that
users can write on the screen or on a tablet instead of typing on a keyboard. Most pen computers
are hand-held devices, which are too small for a full-size keyboard.
How does it work?
The P-ISM (Pen-style Personal Networking Gadget Package) consists of a package of five pens
that each have a unique function, combining to create a virtual computing experience by
producing both a monitor and a keyboard on any flat surface, from which you can carry out
the functions you would normally perform on your desktop computer. P-ISMs are connected
with one another via short-range (Bluetooth) wireless technology, and the whole set is
connected to the Internet through the cellular phone function.
The five components of P-ISM:
1. CPU pen:
The functionality of the CPU is handled by one of the pens; it is also called the computing engine.
2. Communication pen:
P-ISMs are connected with one another through short-range wireless technology. The whole set is
also connected to the Internet through the cellular phone function. They are connected through Tri-
wireless modes (Bluetooth, 802.11B/G, and Cellular) which are made small and kept in a small
pen like device.
3. Virtual keyboard:
The virtual keyboard works on any flat surface, using a camera to track finger
movements. On this specific keyboard, this is done with 3D IR sensor technology plus laser
technology to project a full-size keyboard. You can also change the input language and the
layout of the keyboard, which is more convenient than normal keyboards because you do not
have to buy a new keyboard for every language. Virtual keyboards are also easy to maintain,
as they are not prone to damage from spills, drops and other mishaps.
4. LED projector:
The role of the monitor is taken by the LED projector. LED projectors use LCD technologies
for image creation, with the difference that they use an array of light-emitting diodes as
the light source, negating the need for lamp replacement. They also need less energy and
have a longer lifetime. The projected screen has a resolution of approximately 1024 × 768 px
and is about the size of an A4 sheet of paper.
5. Digital camera:
The digital camera is in the shape of a pen. It is useful for video recording and
videoconferencing; simply put, it is a webcam. It is also connected to the other devices
through Bluetooth, and its major advantage is that it is small and easily portable. It is a
360-degree visual communication device. We have seen video phones hundreds of times in
movies, but conventional visual communication at a distance has been limited by display
devices and terminals. This terminal enables showing of the surrounding atmosphere and
group-to-group communication, with a round display and a central super-wide-angle camera.
Fulgor Nocturnus by Tibaldi — $8 Million
Florentine pen maker Tibaldi specializes in proportion,
design, and excellent technical execution. Based on the
Divine Proportion of Phi, the ratio between the cap and the
visible portion of the barrel of the Fulgor Nocturnus equals
exactly 1.618 when the pen is closed. This gorgeous writing
instrument is encrusted with 945 black diamonds and 123
rubies around the rim. Don't look for the Fulgor Nocturnus
in any store; only one was ever made, and it sold for $8
million at a charity auction in Shanghai. (If possible, read "The Da Vinci Code" by Dan
Brown to learn more about Phi!)
"5G TECHNOLOGY"
Naman Agarwal
CSE II – 2nd
year
INTRODUCTION :
Times have changed greatly compared with the old days, and we are becoming
more and more advanced in the applied sciences. Gone are the days when people used
to connect with the aid of letters, telegrams and landline phones. What we see today is a
sophisticated world of technological innovation and advancement.
The exchange of information with friends, relatives and dear ones has become so
easy and simple that with just a mobile phone we can be in touch with all of them. 5G
is the abbreviation of fifth-generation mobile technology. Wireless communication
commenced in the early 1970s, and after four decades the technology has evolved from 1G to 5G.
From 1G to 5G the world of telecommunication has totally changed, and the aim of the industry
is now to furnish the best possible services to customers. Engineers have worked very hard to
furnish a smooth, undisturbed network, and the result is 5G technology, which aims to be
exactly such a wireless telecommunication network. A 5G network offers data bandwidth greater
than 1 Gbps, furnishes CDMA multiplexing, and has the Internet as its core network. 5G has not
been fully released, but a few countries are already experimenting with the technology.
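To make the "greater than 1 Gbps" figure concrete, here is a quick back-of-the-envelope comparison in Python. The 4G rate and the file size below are assumed illustrative values, not measurements:

```python
def download_seconds(size_gigabytes, link_gbps):
    """Time to transfer size_gigabytes over a link of link_gbps gigabits/s,
    ignoring protocol overhead and congestion."""
    return size_gigabytes * 8 / link_gbps

movie_gb = 2  # a ~2 GB HD movie
t_4g = download_seconds(movie_gb, 0.1)  # assume ~100 Mbps on a good 4G link
t_5g = download_seconds(movie_gb, 1.0)  # the >1 Gbps figure quoted for 5G
print(t_4g, t_5g)  # 160.0 16.0 -> roughly ten times faster
```

On these assumed numbers, the same movie that takes nearly three minutes on 4G arrives in about sixteen seconds on a 1 Gbps 5G link.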
Fifth-generation technology offers high bandwidth and many advanced features, and because
of these parameters it will be in huge demand in the future. Nowadays various wireless and
mobile network technologies are in use: fourth-generation mobile networks, third-generation
mobile networks (UMTS, which stands for Universal Mobile Telecommunications System, and
CDMA2000), Long Term Evolution (LTE), Wi-Fi (the IEEE 802.11 family of wireless networks),
WiMAX (the IEEE 802.16 wireless network), as well as sensor networks and personal area
networks such as Bluetooth and ZigBee.
In all wireless and mobile networks, data and signals are
transferred through IP, i.e. the Internet Protocol, at the
network layer. Fifth-generation technology furnishes all the
necessary facilities, such as MP3 recording,
camera, video and audio players, large phone memory,
and many more applications than users have
ever imagined before.
This new period of telecommunication is about to
begin, and it will surely change everything related to the
cellular industry. In the coming years, 5G
technology will come into use because of its advancement
and affordable cost, and it promises a bright future for
years to come. Mobile multimedia Internet
networks can become totally wireless, without any
limitations, turning networks into a worldwide wireless web (WWWW).
This fifth generation is based on 4G technology, as it is an advanced form of 4G, and its
Internet networks are truly wireless, supported by LAS-CDMA (Large Area Synchronized
Code Division Multiple Access), OFDM (Orthogonal Frequency Division Multiplexing),
MC-CDMA (Multi-Carrier Code Division Multiple Access), UMB (Ultra Mobile Broadband),
LMDS (Local Multipoint Distribution Service) and IPv6.
At the same time, 5G technology offers very large data capabilities and virtually unlimited
data broadcast together with the mobile operating system. It makes a vital difference and
gives more services and advantages to the world when compared with 4G. People cannot avail
themselves of it yet, as much of this is still theory; but before the innovation of 4G,
that too was a theory, and now it exists, which proves that theories are the basis of every
innovation. It is an intelligently applied science that connects the entire world
without limits, and the expected release of this technology is around 2020.
Advantages:
The advantages of 5G technology are as follows:
It possesses very high speed, high capacity and low cost per bit.
It supports multimedia, voice and Internet services.
It also offers global access and service portability.
It has very high uploading and downloading speeds.
It offers high resolution and bi-directional large bandwidth for demanding mobile users.
Disadvantages:
The following are the disadvantages of 5G technology:
It is a technology still under development.
It is difficult to achieve high speed in some parts of the world.
Security and privacy issues are yet to be solved.
Many old devices will not support 5G.
Applications:
5G technology has applications in different fields:
It helps in knowing the weather and one's location, and PCs can be controlled from handsets.
It has applications in the education system, making learning much easier.
At the same time, it has applications in the medical field.
Natural disasters can be detected, and the universe, planets and galaxies can be visualized.
Thus, the applications and advantages will only increase, covering a wide range of uses in
the future.
"The mind is everything. What you think you become.- Gautama Buddha"
"CPU‘s"
Arpan Das
CSE II - 2nd
year
2017 has been a revolutionary year for the computer world. With the release of new processor
lineups from Intel and AMD, it has been a great year for gamers and creators to build a new PC.
For the last five years Intel had been the undisputed champion of the AMD-Intel rivalry. With
AMD's last release, the "Bulldozer"-architecture-based "FX-series" CPUs, it had been quite an
upset for the red team, as their architecture was no match for Intel's next-gen CPUs.
However, in late 2016 AMD announced its new release for 2017: Ryzen, a "Zen"-based
14 nm architecture. It was initially positioned to compete against Intel's 6th-gen (Skylake)
CPUs, which Ryzen beat very easily. Surprisingly, it also did significantly well against the
Intel 7th-gen (Kaby Lake) lineup.
AMD launched first with the Ryzen 7 lineup, which had three models (1700, 1700X, 1800X), all
with 8 cores and 16 threads, to compete with the i7 lineup. Later in the year it released
Ryzen 5, with two 4-core 8-thread models (1400, 1500X) and two 6-core 12-thread models
(1600, 1600X), to compete against the i5 lineup. Lastly, in the last quarter of 2017, AMD
released Ryzen 3 CPUs with two models (1200, 1300X), both 4-core 4-thread processors, to
dethrone the i3. Also in the last quarter of 2017, Intel announced the i9 lineup, with 10 or
more cores and twice as many threads, so AMD launched Ryzen Threadripper, with 12-core and
16-core CPUs also having double the threads.
All that said, let us see how the CPUs performed. For Intel, consider the 7th-gen Kaby Lake
CPUs for a fair comparison. Intel has always been the king of single-core performance, which
mostly favors gaming, as few games utilize many cores. This gave Intel an edge over Ryzen,
as AMD's single-core performance was still a bit lower than Intel's. But for workloads like
video editing and 4K streaming, which love more cores, Ryzen took a huge lead. As the highest
core count from Intel was 4 cores and 8 threads (and only in the i7 lineup), and Kaby Lake
also had less cache memory than the Ryzen CPUs, Intel lost a big edge to AMD. Even the
Ryzen 5 1600 and 1600X beat the i7 in multicore performance.
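The single-core versus multicore trade-off described above is captured by Amdahl's law, sketched here in Python. The parallel fractions used are illustrative guesses, not benchmark data:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only parallel_fraction of the work
    can be spread across the given number of cores (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A game that is only ~20% parallel barely benefits from 8 cores...
print(round(amdahl_speedup(0.20, 8), 2))  # 1.21
# ...while a ~95% parallel video-encoding job scales far better:
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

This is why strong single-core performance mattered for games while Ryzen's extra cores paid off in video editing and streaming workloads.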
Intel answered this with its late release of the 8th-gen Coffee Lake processors, which clock
higher and beat Ryzen in most applications despite having fewer cores. This was indeed a
great comeback for Intel. But AMD is also preparing its reply, with the next generation of
Ryzen CPUs expected to release next year.
So 2017 was indeed a great year for CPUs. A long-lost rivalry was relived: like a phoenix,
AMD rose from the ashes and gave tough competition to the legions of Intel. We hope to see
more from these two titanic companies in the coming years.
"A mysterious new operating system - Fuchsia"
Soumya Ghosh
CSE, 2nd year
Fuchsia is a capability-based, real-time operating system (RTOS) currently being developed
by Google. It was first discovered as a mysterious code post on GitHub in August 2016, without any
official announcement. In contrast to prior Google-developed operating systems such as Chrome
OS and Android, which are based on Linux kernels, Fuchsia is based on a new microkernel called
"Zircon", derived from "Little Kernel", a small operating system intended for embedded systems.
Upon inspection, media outlets noted that the code post on GitHub suggested Fuchsia's capability to
run on universal devices, from embedded systems to smartphones, tablets and personal computers. In
May 2017, Fuchsia was updated with a user interface, along with a developer writing that the project
was not a "dumping ground of a dead thing", prompting media speculation about Google's intentions
with the operating system, including the possibility of it replacing Android.
Fuchsia's user interface and apps are written with "Flutter", a software development kit allowing
cross-platform development abilities for Fuchsia, Android and iOS. Flutter produces apps based
on Dart, offering apps with high performance that run at 120 frames per second. Flutter also offers a
Vulkan-based graphics rendering engine called "Escher", with specific support for "volumetric
soft shadows", an element that seems custom-built to run Google's shadow-heavy "Material
Design" interface guidelines.
Because the Flutter software development kit offers cross-platform opportunities, users are
able to install parts of Fuchsia on Android devices. Early testers noted that, while users
could try Fuchsia, nothing really "works": it is all a bunch of placeholder interfaces that
don't do anything yet. Even so, there are multiple similarities between Fuchsia's interface
and Android's, including a Recent Apps screen, a Settings menu, and a split-screen view for
viewing multiple apps at once.
"IoT to protect the environment!!"
Sabina Parveen
CSE, 2nd year
Internet of Things can protect the environment? Get ready to be amazed................
The Internet of Things (IoT) has a large role to play in the future of smart cities. Governments can use IoT in practically all public-service scenarios to make cities environment friendly. Sensor-enabled devices can help monitor the environmental impact on cities and collect details about sewers, air quality, and garbage. Such devices can also help monitor woods, rivers, lakes and oceans.
Many environmental trends are so complex that they are difficult to conceptualise. IoT is a recent communication paradigm that envisions a near future in which the objects of everyday life will be equipped with microcontrollers, transceivers for digital communication, and suitable protocol stacks that will enable them to communicate not only with one another but also with users, becoming an integral part of the internet and the environment. IoT environmental monitoring applications usually use sensors to lend a hand in environmental protection by monitoring air or water quality and atmospheric or soil conditions, and can even extend to areas like monitoring the movements of wildlife and their habitats.
An urban IoT platform can provide the means to monitor the quality of the air in crowded areas, parks, or fitness trails. Everything from real-time monitoring of water quality in the ocean through sensors connected to a buoy that sends information via the GPRS network, to the monitoring of goods being shipped around the world, to smart power grids that create conditions for more rational production, planning and consumption, can be achieved via microchips embedded in objects that communicate with each other.
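The sensor-to-alert idea behind such monitoring can be sketched in a few lines of Python. The sensor stub and the threshold value below are illustrative assumptions, not details from any real deployment:

```python
# Toy air-quality monitor: read a (simulated) sensor and flag bad readings.
import random

PM25_THRESHOLD = 35.0  # hypothetical alert level in micrograms per cubic metre

def read_pm25() -> float:
    # Stand-in for a real sensor driver on an IoT device
    return random.uniform(5.0, 60.0)

def check_air_quality(reading: float) -> str:
    # Classify a single reading against the threshold
    return "ALERT" if reading > PM25_THRESHOLD else "OK"

reading = read_pm25()
print(f"PM2.5 = {reading:.1f} -> {check_air_quality(reading)}")
```

In a real urban IoT platform the reading would come from a hardware sensor and the result would be published over the network rather than printed.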
Some applications related to the IoT aren't new: toll collection tags, security access key cards, devices to track stolen cars and various types of identity tags for retail goods and livestock. Other monitoring and tracking systems have more business uses, such as solving or averting problems (for example, sending a cellphone alert to drivers that traffic is backed up at a particular exit ramp) and increasing efficiency (for example, enabling a utility to remotely switch off the electric meter in a just-vacated apartment). ICT-enabled climate mitigation measures could reduce global climate change by 16.5% by 2020 compared with current efforts.
For India's Smart City programme to flourish, waste management will play a very important role. In the current scenario, the waste management process is in shambles and the government is struggling to find ways for eco-friendly disposal. IoT solutions and devices for waste management revolve around two main benefits: determining the best time to collect waste and figuring out what route trucks should follow. These two advantages can reduce the time it takes to address potential waste build-up problems. In waste disposal, technologies like IoT can help the city administration control the amount of waste disposed of at regular intervals, thereby avoiding build-up and using the end residue for other developmental activities like road building or supplying residue gas to power stations.
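The "best time to collect" decision can be sketched as a simple fill-level check over bin readings reported by sensors. The bin IDs and the 80% threshold below are made up purely for illustration:

```python
# Decide which (hypothetical) smart bins need collection based on fill level.
FILL_THRESHOLD = 0.8  # collect when a bin is 80% full (illustrative value)

# Sensor-reported fill fractions for three imaginary bins
bins = {"bin-01": 0.35, "bin-02": 0.92, "bin-03": 0.81}

# Pick out the bins that have crossed the threshold
to_collect = sorted(b for b, fill in bins.items() if fill >= FILL_THRESHOLD)
print("Schedule pickup for:", to_collect)  # ['bin-02', 'bin-03']
```

A real system would feed these selections into a route planner for the collection trucks.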
In a few years' time, water will be the most precious commodity in India; it may become more expensive than oil and gold. In India, over 70% of the population is employed in agriculture, so water management is extremely crucial, as water is a scarce commodity. Water management and precision agriculture should almost always be discussed together, for a number of reasons. The deployment of sensors and actuators provides farmers with increased visibility over their operation, allowing them to optimise water usage and minimise waste by assessing a number of metrics including temperature, water pressure and quality. IoT-enabled water management can also be done at the consumer level with the installation of smart water sensors in homes and apartments. Those devices, combined with data analytics, can give residents more visibility into the amount of water they use, potentially saving money and conserving this precious resource.
Deforestation is another issue that is impacting not only India but the global environment. Drone technology has been used to prevent and fight forest fires, and drones are also now part of an initiative started by BioCarbon Engineering to replant 1 billion trees lost to deforestation. Currently, more than 6.5 billion trees are lost each year due to human activities and natural disasters, according to the company. IoT is an eco-friendly technology that benefits not only the environment but mankind as a whole.
"Snapdragon 835 Mobile Platform"
Navin Gupta
CSE II – 2nd year
With an advanced 10-nanometer design, the Qualcomm Snapdragon 835 mobile platform can support
phenomenal mobile performance. It is 35% smaller and uses 25% less power than previous designs,
and is engineered to deliver exceptionally long battery life, lifelike VR and AR experiences, cutting-
edge camera capabilities and Gigabit Class download speeds.
Qualcomm Snapdragon processors are a product of Qualcomm Technologies, Inc.
Snapdragon 835 mobile platform advancements:
Small in size, big on features
The Snapdragon 835 mobile platform is designed to quickly and efficiently support extraordinary
experiences on your mobile device, integrating cutting edge technologies—all on a single 10 nm chip.
Gigabit Class connectivity
The Snapdragon X16 LTE modem is designed to deliver peak download speeds of up to one Gigabit per second; that's 10x as fast as first-generation 4G LTE. Together with multi-Gigabit 802.11ad and integrated 2x2 802.11ac MU-MIMO Wi-Fi, it gives you wireless Internet access at fiber-optic speeds.
Advanced Qualcomm Spectra camera ISP
The 14-bit Qualcomm Spectra 180 ISP supports capture of up to 32 megapixels with zero shutter lag, and offers smooth zoom, fast autofocus and true-to-life colors for improved image quality. Dual 14-bit ISPs support a single 32 MP camera or dual 16 MP cameras for the ultimate photography and videography experience.
Efficient Hexagon 682 DSP
The Qualcomm Hexagon 682 DSP is designed to significantly improve performance and battery life. It supports the latest machine learning frameworks and image processing, and includes the Qualcomm All-Ways Aware sensor hub and Hexagon Vector eXtensions (HVX) for optimal efficiency, utilizing connectivity and sensors.
Powerful Kryo CPU
The Qualcomm Kryo 280 64-bit CPU, built on ARM Cortex technology and manufactured in a 10 nm FinFET process, delivers Qualcomm's most power-efficient architecture to date, with independent efficiency and performance clusters, each designed to optimize for a different aspect of the user experience.
Immersive visual graphics
Delivering up to 25% faster graphics rendering and up to 60x more display colors compared to previous designs, the Qualcomm Adreno 540 GPU supports advanced 3D graphics rendering and lifelike visuals for immersive experiences.
Qualcomm Haven Security Suite
Get comprehensive user and device authentication with the Qualcomm Haven security suite, which
includes a full biometric suite for fingerprint scanning, voice, iris and facial recognition.
Incredible mobile experiences
The Snapdragon 835 mobile platform is designed to support experiences you have to see to believe.
From the lightning-fast streaming of video and audio, to alternate reality exploration, to machine
learning capabilities that can personalize your experience—the robust processing strength,
groundbreaking battery efficiency and superior connectivity of our mobile platform help bring
innovative user experiences to life.
Qualcomm Quick Charge 4 technology
Quick Charge 4 is 20% faster and 30% more efficient than the previous generation, charging a device from zero to 50% in about 15 minutes.
Features and specifications:
GPU
+ Adreno 540 GPU
+ OpenGL ES 3.2, OpenCL 2.0 full, Vulkan, DX12
DSP
+ Hexagon 682 DSP with:
• Hexagon Vector extensions
• Qualcomm All-Ways Aware
• TensorFlow and Halide support
• Qualcomm Neural Processing Engine (NPE) SDK
Display
+ UltraHD Premium-ready
+ 4K Ultra HD, 60 FPS
+ 10-bit color depth
+ DisplayPort, HDMI, and USB Type-C support
Audio
+ Qualcomm Aqstic audio codec and speaker amplifier
+ High 123dB SNR, Native DSD support
+ Qualcomm aptX audio playback with support for aptX Classic and
HD
CPU
+ 8x Kryo 280 CPU
+ Up to 2.45 GHz
+ 10nm FinFET process technology
Camera
+ Qualcomm Spectra 180 ISP
+ Dual 14-bit ISPs
+ Up to 16 MP dual camera
+ Up to 32 MP single camera
+ Qualcomm Clear Sight camera features, Hybrid Autofocus, Optical Zoom, hardware-accelerated
Face Detection, HDR Video Recording
Video
+ Up to 4K UltraHD capture @ 30 fps
+ Up to 4K UltraHD playback @ 60 fps
+ H.264 (AVC), H.265 (HEVC), VP9
Memory
+ LPDDR4x, dual channel
+ UFS2.1 Gear3 2L
+ SD 3.0 (UHS-I)
Charging
+ Quick Charge 4 technology
+ Qualcomm WiPower technology
Connectivity
+ Qualcomm Wi-Fi 802.11ad Multi-gigabit
+ Wi-Fi integrated 802.11ac 2x2 with MU-MIMO
+ 2.4 GHz, 5 GHz and 60 GHz
+ Bluetooth 5.0
Optimized Software Solutions
+ Android and Windows OS
Security
+ Qualcomm Secure MSM technology
+ Qualcomm Haven Security Suite
+ Qualcomm Snapdragon Studio Access content protection
Modem
+ Snapdragon X16 LTE modem
+ Downlink: LTE Cat 16 up to 1 Gbps, 4x20 MHz carrier aggregation, up to 256-QAM
+ Uplink: LTE Cat 13 up to 150 Mbps, Qualcomm Snapdragon Upload+
(2x20 MHz carrier aggregation, up to 64-QAM, uplink data compression)
+ Qualcomm All Mode with support for all seven cellular modes plus
LTE-U and LAA. Support for:
• VoLTE with SRVCC to 3G and 2G, HD and Ultra HD Voice (EVS), CSFB to 3G and 2G
• Qualcomm® Signal Boost with carrier aggregation
Location
+ Qualcomm Location Suite with support for:
• GPS, GLONASS, BeiDou, Galileo, and QZSS systems
Rumours have suggested that Qualcomm will
likely be launching the new Snapdragon
845 processor at this year’s Snapdragon
Technology Summit in December.
"PYTHON: THE MASTER OF LANGUAGE"
Soumodeep Mukherjee
CSE I – 2nd year
As engineering students, we all know that there are many languages we need to learn in computer engineering. Python is one of the most popular, emerging as a first-class citizen in modern software development, infrastructure management, and data analysis. It is no longer a back-room utility language but a major force in web application development and systems management, and a key driver behind the explosion in big data analytics and machine intelligence. So let's start with the history of Python.
HISTORY OF PYTHON
Guido van Rossum started implementing Python in December 1989 at CWI in the Netherlands. The ABC programming language is said to be the predecessor of Python, which was designed to be capable of exception handling and of interfacing with the Amoeba operating system.
FEATURES
Python is a fully object-oriented programming language. That is why we have access to inheritance, polymorphism, constructors and destructors, classes and objects, lists, tuples, dictionaries, exception handling, etc.
EXAMPLE OF PYTHON
1. Printing "Hello Python":
To print this one statement, we write the command given below:
print("Hello Python")
2. Printing the sum of two numbers using a function:
def sum(x, y):
    print("Sum of two numbers is:", x + y)

sum(20, 30)
That is how easy it is to execute a program.
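The object-oriented features mentioned earlier (classes, inheritance, polymorphism) can be sketched in a few lines; the class names here are purely illustrative:

```python
class Animal:
    def __init__(self, name):          # constructor
        self.name = name

    def speak(self):
        return f"{self.name} makes a sound"

class Dog(Animal):                     # inheritance from Animal
    def speak(self):                   # polymorphism: overrides the parent method
        return f"{self.name} says woof"

# The same call resolves to different methods depending on the object
for a in [Animal("Generic"), Dog("Rex")]:
    print(a.speak())
```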
DIFFERENCES FROM OTHER LANGUAGES
1. Difference from Java:
Python programs are generally expected to run slower than Java programs, but they also take
much less time to develop. Python programs are typically 3-5 times shorter than equivalent
Java programs. This difference can be attributed to Python's built-in high-level data types and
its dynamic typing. For example, a Python programmer wastes no time declaring the types
of arguments or variables, and Python's powerful polymorphic list and dictionary types, for
which rich syntactic support is built straight into the language, find a use in almost every
Python program. Because of the run-time typing, Python's run time must work harder than
Java's. For example, when evaluating the expression a+b, it must first inspect the objects a
and b to find out their type, which is not known at compile time. It then invokes the
appropriate addition operation, which may be an overloaded user-defined method. Java, on
the other hand, can perform an efficient integer or floating point addition, but requires
variable declarations for a and b, and does not allow overloading of the + operator for
instances of user-defined classes. For these reasons, Python is much better suited as a "glue"
language, while Java is better characterized as a low-level implementation language. In fact,
the two together make an excellent combination. Components can be developed in Java and
combined to form applications in Python; Python can also be used to prototype components
until their design can be "hardened" in a Java implementation. To support this type of
development, a Python implementation written in Java is under development, which allows
calling Python code from Java and vice versa. In this implementation, Python source code is
translated to Java byte code.
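The run-time type inspection and operator overloading described above can be illustrated with a small sketch (the Vec class is purely illustrative):

```python
class Vec:
    """Toy 2-D vector that overloads '+' (illustrative only)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Python dispatches a + b to this method when a is a Vec
        return Vec(self.x + other.x, self.y + other.y)

a, b = 2, 3
print(a + b)              # int addition, resolved at run time: 5

v = Vec(1, 2) + Vec(3, 4) # the same '+' now calls Vec.__add__
print(v.x, v.y)           # 4 6
```

The type of `a` and `b` is only known when the expression runs, which is exactly why Python's interpreter must work harder than a Java compiler, and why `+` can be redefined for user classes.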
2. Difference from JavaScript:
Python's "object-based" subset is roughly equivalent to JavaScript. Like JavaScript (and unlike Java), Python supports a programming style that uses simple functions and variables
without engaging in class definitions. However, for JavaScript, that's all there is. Python, on the other hand, supports writing much larger programs and better code reuse through a true
object-oriented programming style, where classes and inheritance play an important role.
3. Difference from C++:
Almost everything said for Java also applies for C++, just more so: where Python code is
typically 3-5 times shorter than equivalent Java code, it is often 5-10 times shorter than equivalent C++ code! Anecdotal evidence suggests that one Python programmer can finish in
two months what two C++ programmers can't complete in a year. Python shines as a glue language, used to combine components written in C++.
GRAPHICS IN PYTHON
Python also supports graphical programming. We can create all kinds of boxes, lists, calculators, images and icons, window widgets, drop-down boxes, etc.
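A minimal sketch of such a window using Tkinter, the GUI library bundled with most Python distributions (the widget layout here is just an example):

```python
import tkinter as tk

def build_window():
    # Assemble a tiny window with a label and a quit button
    root = tk.Tk()
    root.title("Hello")
    tk.Label(root, text="Hello Python").pack()
    tk.Button(root, text="Quit", command=root.destroy).pack()
    return root

# Call build_window().mainloop() to display the window
# (this requires a graphical display to be available).
```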
APPLICATIONS FOR PYTHON
Python is used in many application domains. The applications are given below:
1. Web and Internet Development
Python offers many choices for web development:
Frameworks such as Django and Pyramid.
Micro-frameworks such as Flask and Bottle.
Advanced content management systems such as Plone and django CMS.
Python‘s standard library supports many Internet
protocols:
HTML and XML
JSON
E-mail processing.
Support for FTP, IMAP, and other Internet protocols.
Easy-to-use socket interface.
And the Package Index has yet more libraries:
Requests, a powerful HTTP client library.
BeautifulSoup, an HTML parser that can handle all
sorts of oddball HTML.
Feedparser for parsing RSS/Atom feeds.
Paramiko, implementing the SSH2 protocol.
Twisted Python, a framework for asynchronous network programming.
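As a quick taste of the standard library's JSON support mentioned above:

```python
import json

# Serialize a Python dict to a JSON string and parse it back
payload = {"name": "I-Brook", "issue": 3, "topics": ["IoT", "Python"]}
text = json.dumps(payload)   # dict -> JSON string
data = json.loads(text)      # JSON string -> dict
print(data["topics"][0])     # IoT
```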
2. Business Applications
Python is also used to build ERP and e-commerce systems:
Odoo is an all-in-one management software that offers a range of business applications that form a complete suite of enterprise management applications.
Tryton is a three-tier high-level general purpose application platform.
3. Software Development
Python is often used as a support language for software developers, for build control and management, testing, and in many other ways.
SCons for build control.
Buildbot and Apache Gump for automated continuous compilation and testing.
4. Desktop GUIs
The Tk GUI library is included with most binary distributions of Python. Some toolkits that are
usable on several platforms are available separately:
wxWidgets
Kivy, for writing multitouch applications.
Qt via pyqt or pyside
Platform-specific toolkits are also available:
GTK+
Microsoft Foundation Classes through the win32 extensions.
5. Education
Python is a superb language for teaching programming, both at the introductory level and in
more advanced courses.
6. Scientific and Numeric
Python is widely used in scientific and numeric computing:
SciPy is a collection of packages for mathematics, science, and engineering.
Pandas is a data analysis and modelling library.
IPython is a powerful interactive shell that features easy editing and recording of a work
session, and supports visualizations and parallel computing.
The Software Carpentry Course teaches basic skills for scientific computing, running
bootcamps and providing open-access teaching materials.
7. Ethical Hacking
In an ethical hacking with Python course, you'll run through the fundamentals of everything from crafting simple lines of code using variables and statements to setting up and using dictionaries. Once the basics are covered, tutorials include:
SYN flood attack with Scapy,
Buffer overflow and exploit writing with Python,
Forensic investigation using hashlib and PyPDF.
Though targeted at complete beginners, such a course also serves as a handy refresher for seasoned programmers who want to sharpen their coding skills or use Python in ethical hacking scenarios.
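The hashlib module mentioned above computes cryptographic digests, which forensic investigators use to verify file integrity; a minimal sketch:

```python
import hashlib

# SHA-256 digest of some sample bytes; identical input always yields
# an identical digest, which is how evidence integrity is verified
digest = hashlib.sha256(b"evidence file contents").hexdigest()
print(len(digest))  # 64 hex characters
```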
Conclusion:
Therefore, Python is a most useful and user-friendly language. It can help everywhere in our computing world, and it is easy to use and to learn. Python is not a "toy" language: even though scripting and automation cover a large chunk of Python's use cases, Python is also used to build robust, professional-quality software, both as standalone applications and as web services.
History behind coining the term "Python"
"At the time when he began implementing Python, Guido van Rossum was also reading the
published scripts from "Monty Python's Flying Circus" (a BBC comedy series from the seventies,
in the unlikely case you didn't know). It occurred to him that he needed a name that was short,
unique, and slightly mysterious, so he decided to call the language Python."
"A POINTWISE GLANCE AT SIXTH SENSE TECHNOLOGY"
Debjit Das
CSE II – 2nd year
INTRODUCTION:
SixthSense is a wearable gestural device that augments the physical world around us with digital information.
It is a technology that uses human gestures to make the world more interactive and workflows easier.
It is a portable device worn around the neck.
COMPONENTS:
A mobile computing device
A mirror
A pocket projector
Colored markers
A camera
TECHNIQUE BEHIND:
The hardware that makes SixthSense work is a pendant-like wearable mobile interface.
It has a camera, a mirror and a projector, and is connected wirelessly to a Bluetooth smartphone that can slip comfortably into one's pocket.
The camera recognizes individuals, images, pictures and the gestures one makes with one's hands.
This information is sent to the smartphone for processing.
The downward-facing projector projects the output image onto the mirror.
The mirror reflects the image onto the desired surface.
Thus, digital information is freed from its confines and placed in the physical world.
SIXTH SENSE IN GAMING:
We can do all the kinds of gaming that exist now, but not only that: we can use the physical world inside the game. You can play with physical stuff and invent new games. Maybe you can hide something in the physical world, such as opening a book and hiding something in its pages.
ADVANTAGES:
Portable
Inexpensive
Multi-sensory
Connects the world and information
Open source
Direct data access from the machine in real time
LIMITATIONS:
The software does support the use of real-time video streams to produce augmented reality.
However, there are hardware limitations in the devices that we currently carry around with us.
For example, many phones will not allow the external camera feed to be manipulated in real time.
Post-processing can occur, however.
FUTURE OF SIXTH SENSE:
Interactive advertisements.
True 3D point media.
3D visualizations.
Solar batteries via a small solar panel.
The camera can act as a third eye for blind persons.
"According to researchers, within 10 years we will be here with the ultimate sixth-sense brain implant."
CONCLUSION:
SixthSense recognizes the objects around us, displaying information automatically and letting us access it in any way we need.
The SixthSense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system.
It allows us to interact with this information via natural hand gestures.
It has the potential to become the ultimate "transparent" user interface for accessing information about everything around us.
The Sixth Sense is a 1999 American supernatural horror film written
and directed by M. Night Shyamalan. The film tells the story of Cole
Sear (Haley Joel Osment), a troubled, isolated boy who is able to see
and talk to the dead, and an equally troubled child psychologist named
Malcolm Crowe (Bruce Willis) who tries to help him. The film
established Shyamalan as a writer and director, and introduced the
cinema public to his traits, most notably his affinity for surprise
endings. The film was nominated for six Academy Awards,
including Best Picture, Best Director for Shyamalan, Best Original
Screenplay, Best Supporting Actor for Osment, and Best Supporting Actress for Toni Collette.
"Video Game Development"
Darshan Bhattacharyya
CSE II – 2nd year
Video games are nowadays very popular with everyone, especially the younger generation, but they started about 57 years ago: the first video game was developed in 1960. Back then, games required mainframe computers to run and were not available to the general public.
Commercial game development started with the advent of first-generation video game consoles and early home computers like the Apple I. Due to the low cost and low capabilities of computers, a lone programmer could make a full video game. Now, in the 21st century, creating a video game single-handedly has become very difficult because of ever-increasing computer processing power and heightened consumer expectations. Currently, the average cost of producing a high-end video game for a mainstream console or PC is over 20 million US dollars; in 2000 it was around 1 to 4 million US dollars, and in 2006 it crossed 5 million US dollars.
Generally, game development is done in phases. First, in pre-production, pitches, prototypes, and game design documents are written. When the project is approved, full-scale development starts. It can involve hundreds of personnel, each given various responsibilities. The team includes artists, designers, programmers and testers.
Roles:
Producer: Development is overseen by internal and external producers. The producer working for the developer is the internal producer, who manages the development team, schedules, hiring of staff, etc. The producer working for the publisher is the external producer, who oversees development progress and budget.
Publisher: A video game publisher publishes video games that it has either developed itself or had developed by an external developer.
Development team: Nowadays a video game development team includes a wide range of people, from artists to software engineers, and there are various roles in the video game development industry. The most represented are artists, followed by programmers, then designers, and finally audio specialists. These positions are employed full-time; other positions, such as testers, may be employed only part-time.
Designer: The game designer is the main visionary of the game. Designers design the gameplay, conceiving and designing the rules and structure of the game.
Artist: A video game artist is a visual artist who creates the visual art for the game. Their job may be 2D- or 3D-oriented. A 2D artist designs textures, sprites and environmental backdrops, which make up the concept art. A 3D artist does the modelling and creates the animations, 3D environments, etc.
Programmer: Game programmers are the software engineers of the development team who primarily develop the game. They take on different development roles:
Physics: programming the game engine, including simulation of physics.
AI: producing game agents using game AI techniques, such as scripting, planning, etc.
Graphics: managing graphical content utilization, producing the graphics engine and integrating models to work with the physics engine.
Sound: integrating sound, music, speech and sound effects at the proper locations and times.
Gameplay: implementing the various game rules and features.
Scripting: developing and maintaining high-level in-game commands, such as AI and level-editor triggers.
UI: developing user interface elements such as option menus, HUDs, etc.
Input processing: processing and ensuring compatibility of various input devices such as mouse, keyboard and gamepad.
Game tools: producing tools to accompany the development of the game.
Development process: Video game development is a software development process, as a video game is software with art, audio and gameplay. Formal software development methods are often overlooked, and games built with poor development methodology are likely to run over budget and ship with many bugs. One method applied to game development is agile development, which is based on iterative prototyping; it is used because most projects do not start with a clear idea of the final game. In practice, game development combines many methods: for example, asset development may follow the waterfall method while gameplay design is iteratively prototyped.
History of games: Which video game was created first depends on the definition of a video game. The first games had little entertainment value; their focus was separate from the user experience. All those games ran on mainframe computers; for example, OXO, written by Alexander S. Douglas in 1952, was the first game to use a digital display. In 1958 Willy Higinbotham, a physicist working at Brookhaven National Laboratory, created a game called Tennis for Two, which used an oscilloscope for display. In 1961 a mainframe computer game named Spacewar was built by a group of MIT students led by Steve Russell.
Commercial game development started in the 1970s. Computer Space was the first commercially sold, coin-operated video game; it used a television screen for display, and 74-series TTL chips made up the computer system. In 1972 the first home console, the Magnavox Odyssey, developed by Ralph H. Baer, was released. Console developers then started to work on consoles that could run games independently using microprocessors. The first second-generation console, the Fairchild Channel F, was released in 1976.
Alongside console games, mobile games also started to gain popularity in the early 2000s. However, mobile games distributed by mobile operators remained a marginal form of gaming until the Apple App Store was launched in 2008.
In 2005 a console game cost $3M to $6M, whereas by 2009 costs ranged from $6M to $20M, and by 2012 they had already reached $66 million. By then the video game market was no longer dominated by console games; now the fastest-growing market is mobile gaming, with average annual growth of 19% for smartphones and 48% for tablets.
Over the past several years the gaming industry has reached new heights. There are now many companies like Ubisoft, Electronic Arts, Square Enix, etc. leading the game development market. Some of the very popular titles of the last decade are Medal of Honor, Call of Duty and Battlefield among war games; Prince of Persia, Tomb Raider and Assassin's Creed among action-adventure games; and Hitman among stealth games. Many more popular games have ruled the video game industry, and the use of next-generation technology has brought the industry to another height.
Picture Puzzle - Who is the killer?
A lady is found dead in the washroom. Of these four, who do you think killed her, and why?
"WANNACRY says wanna cry??"
Shivam Manna
CSE I – 2nd year
The WannaCry malware is a scary type of trojan called "ransomware."
Ransomware is software that encrypts data and decrypts it only when a condition is met. Once the ransomware is loaded onto a computer through a vulnerability, it instantly encrypts the data and makes it unusable. It may ask for a password to decrypt, or it may show a message communicating the condition for decryption. It may ask for payment, for the release of a prisoner, for a change in politics, anything. Once the condition is met, a password is provided which can be used to unscramble the information and make it usable again.
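The encrypt-then-decrypt idea can be illustrated with a toy XOR cipher on an in-memory string. This is a teaching sketch only, not real cryptography, and not the AES/RSA scheme WannaCry actually used:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same
    # operation twice with the same key restores the original data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"important document"
key = b"k3y"
scrambled = xor_cipher(secret, key)    # unreadable without the key
restored = xor_cipher(scrambled, key)  # the "password" unscrambles it
print(restored == secret)  # True
```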
The WannaCry ransomware attack was a May 2017 worldwide cyber attack by the WannaCry ransomware cryptoworm, which targeted computers running the Microsoft Windows operating system. The virus in effect holds the infected computer hostage and demands that the victim pay a ransom (in the Bitcoin cryptocurrency) in order to regain access to the files on his or her computer.
As for Bitcoin, it is a worldwide-accepted currency used for digital payments. It is the first decentralized digital currency, and it is open-source software developed by unknown people. The system is peer-to-peer, and transactions take place between users directly.
As per records, the attack began on 12 May 2017 and affected approximately 230,000 computers across 150 countries, including developed countries like the UK, Spain, Russia and Ukraine, and most of Europe. Even developing countries like India fell victim to it.
WannaCry propagates using EternalBlue (a vulnerability in Microsoft's software), which was discovered by the U.S. National Security Agency and leaked by the Shadow Brokers hacker group. The vulnerability was patched by Microsoft soon after the leak. The problem comes from older versions of Windows, or machines without Windows Updates, as these were not patched by Microsoft and were left open to attack. Russia and India were hit particularly hard because Microsoft's Windows XP, one of the operating systems most at risk, was still widely used in these countries.
How did it work??
Ransomware is a type of cyber attack. For cyber criminals to gain access to the system they need to
download a type of malicious software onto a device within the network. This is often done by getting
a victim to click on a link or download it by mistake. Once the software is on a victim's computer the
hackers can launch an attack that locks all files it can find within a network. This tends to be a gradual
process with files being encrypted one after another. Large companies with sophisticated security
systems are able to spot this occurring and can isolate documents to minimize damage. Individuals
might not be so lucky and could end up losing access to all of their information.
Similarly, in the case of WannaCry, an email containing an attachment is circulated. Upon downloading the attachment, it instantly freezes the system and asks for a payment of $300 in BTC. If this is not paid within three days, the amount is doubled to $600. After seven days without payment, WannaCry will delete all of the encrypted files and all data will be lost.
WannaCry targets and encrypts virtually all Windows file types, including:
.3dm, .3ds, .3g2, .3gp, .602, .7z, .ARC, .PAQ, .accdb, .aes, .ai, .asc, .asf, .asm, .asp, .avi, .backup, .bak, .bat, .bmp, .brd, .bz2, .cgm, .class, .cmd, .cpp, .crt, .cs, .csr, .csv, .db, .dbf, .dch, .der, .dif, .dip, .djvu, .doc, .docb, .docm, .docx, .dot, .dotm, .dotx, .dwg, .edb, .eml, .fla, .flv, .frm, .gif, .gpg, .gz, .hwp, .ibd, .iso, .jar, .java, .jpeg, .jpg, .js, .jsp, .key, .lay, .lay6, .ldf, .m3u, .m4u, .max, .mdb, .mdf, .mid, .mkv, .mml, .mov, .mp3, .mp4, .mpeg, .mpg, .msg, .myd, .myi, .nef, .odb, .odg, .odp, .ods, .odt, .onetoc2, .ost, .otg, .otp, .ots, .ott, .p12, .pas, .pdf, .pem, .pfx, .php, .pl, .png, .pot, .potm, .potx, .ppam, .pps, .ppsm, .ppsx, .ppt, .pptm, .pptx, .ps1, .psd, .pst, .rar, .raw, .rb, .rtf, .sch, .sh, .sldm, .sldx, .slk, .sln, .snt, .sql, .sqlite3, .sqlitedb, .stc, .std, .sti, .stw, .suo, .svg, .swf, .sxc, .sxd, .sxi, .sxm, .sxw, .tar, .tbk, .tgz, .tif, .tiff, .txt, .uop, .uot, .vb, .vbs, .vcd, .vdi, .vmdk, .vmx, .vob, .vsd, .vsdx, .wav, .wb2, .wk1, .wks, .wma, .wmv, .xlc, .xlm, .xls, .xlsb, .xlsm, .xlsx, .xlt, .xltm, .xltx, .xlw and .zip files.
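To illustrate, a scanner that checks a filename against this target list could be sketched as follows in Python (the set below is an abbreviated, hypothetical subset of the full list above, not WannaCry's actual code):

```python
from pathlib import Path

# Abbreviated subset of the targeted extensions listed above (illustrative only).
TARGETED = {".doc", ".docx", ".xls", ".xlsx", ".jpg", ".png", ".pdf", ".zip"}

def is_targeted(filename: str) -> bool:
    """Return True if the file's extension appears in the target set."""
    return Path(filename).suffix.lower() in TARGETED

print(is_targeted("thesis.docx"))  # True
print(is_targeted("notes.xyz"))    # False
```

A real ransomware strain would walk every drive with such a check before encrypting matching files, which is why the list above is so broad.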
The following message is shown on the desktop after the data is encrypted:
A 22-year-old security researcher named Marcus Hutchins managed to stop the spread of the attack by accidentally triggering a "kill switch" when he bought a web domain for less than £10. When the WannaCry program infects a new computer, it contacts this web address and is programmed to terminate itself if it manages to get through. Once the researcher bought the domain, the ransomware could connect and was therefore stopped.
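The kill-switch logic described above can be sketched as follows (the domain is a placeholder, not the real kill-switch address):

```python
import urllib.request

def kill_switch_triggered(url: str = "http://example.com") -> bool:
    """Return True if the kill-switch domain is reachable.

    Mirrors the behavior described above: the malware contacts a web
    address and terminates itself if the connection succeeds. The URL
    here is a placeholder for illustration.
    """
    try:
        urllib.request.urlopen(url, timeout=5)
        return True   # domain is registered and responding: stop spreading
    except Exception:
        return False  # unreachable: the malware would continue
```

While the domain was unregistered, every lookup failed and the malware kept spreading; registering it made the check succeed everywhere at once.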
But later, WannaCry 2.0 emerged without the kill switch.
So the spread of WannaCry wasn't actually stopped, but only slowed.
Preventing ransomware attacks:
Ransomware can't be fully stopped, but its effects can certainly be minimized by:
Keeping your operating system and antivirus up to date.
Regularly backing up your files to an external hard drive.
Being wary of phishing emails, spam, and malicious attachments.
Disabling the loading of macros in your Office programs.
Disabling the Remote Desktop feature whenever possible.
Using two-step authentication.
Using a safe, password-protected internet connection.
Bitcoin
Bitcoin is a digital currency that is not tied to a bank or government and allows users to spend money anonymously. The coins are created by users who "mine" them by lending computing power to verify other users' transactions, receiving bitcoins in exchange. The coins can also be bought and sold on exchanges for US dollars and other currencies.
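The "mining" mentioned above boils down to a proof-of-work search. The toy Python illustration below is a drastic simplification of real Bitcoin mining (real miners hash block headers against a much harder, dynamically adjusted target; the data and difficulty here are invented):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 of (data + nonce) starts with
    `difficulty` zero hex digits -- a toy stand-in for Bitcoin's target check."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# The search is the "lent computing power"; the found nonce is the proof.
nonce = mine("some transactions", difficulty=3)
print(nonce)
```

The key asymmetry is that finding the nonce takes many hash attempts, while anyone can verify it with a single hash.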
"BLUE EYES TECHNOLOGY"
Debolina Ghosh
CSE – 3rd year
The BLUE EYES technology aims at creating computational machines that have perceptual and
sensory ability like those of human beings.
Key features of the system:-
Visual attention monitoring (eye motility analysis).
Physiological condition monitoring (pulse rate, blood oxygenation).
Operator's position detection (standing, lying).
Wireless data acquisition using Bluetooth technology.
Real-time user-defined alarm triggering.
Recording of physiological data, the operator's voice and an overall view of the control room.
Recorded data playback.
Parts of Blue Eyes technology:-
The main parts of the Blue Eyes system are:
1. Data Acquisition Unit-
The Data Acquisition Unit is the mobile part of the Blue Eyes system. Its main task is to fetch the physiological data from the sensors and to send it to the central system to be processed.
2. Central System Unit-
The Central System Unit hardware is the second peer of the wireless connection. The box contains a Bluetooth module and a PCM codec for voice data transmission. The module is interfaced to a PC using parallel, serial and USB cables.
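As a sketch of the real-time alarm triggering listed among the key features, the central system could check each incoming physiological sample against user-defined thresholds. The threshold values below are invented for illustration, not taken from any Blue Eyes specification:

```python
def check_sample(pulse_bpm: float, spo2_percent: float,
                 pulse_range=(50, 120), spo2_min=90.0) -> list:
    """Return alarm messages for one sensor sample.

    pulse_range and spo2_min play the role of the 'user-defined'
    alarm thresholds; the defaults here are illustrative only.
    """
    alarms = []
    if not pulse_range[0] <= pulse_bpm <= pulse_range[1]:
        alarms.append(f"pulse out of range: {pulse_bpm} bpm")
    if spo2_percent < spo2_min:
        alarms.append(f"low blood oxygenation: {spo2_percent}%")
    return alarms

print(check_sample(72, 98))   # no alarms
print(check_sample(140, 85))  # pulse and oxygenation alarms
```

In the real system, such checks would run on the stream of samples the Data Acquisition Unit sends over Bluetooth.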
Types of users:-
Users belong to three categories:
• Operators
• Supervisors
• System administrators
Advantages:-
Minimization of ecological consequences, financial loss and threats to human life. The Blue Eyes system provides technical means for monitoring and recording the human operator's physiological condition.
Disadvantages:-
It doesn't predict or interfere with the operator's thoughts.
It cannot directly force the operator to work.
Applications:-
1. It can be used in the field of security and control, where the contribution of a human operator is required at all times.
2. Engineers at IBM's Almaden Research Center in San Jose, CA, report that a number of large retailers have implemented surveillance systems that record and interpret customer movements, using software from Almaden's Blue Eyes research project. Blue Eyes is developing ways for computers to anticipate users' wants by gathering video data on eye movement and facial expression. Your gaze might rest on a Web site heading, for example, and that would prompt your computer to find similar links and to call them up in a new window. But the first practical use for the research turns out to be snooping on shoppers.
3. Another application would be in the automobile industry. By simply touching a computer input device such as a mouse, the computer system is designed to be able to determine a person's emotional state.
CONCLUSION:-
Blue Eyes addresses the need for a real-time monitoring system for a human operator. The approach is innovative since it helps supervise the operator, not the process, as is the case in presently available solutions. In its commercial release this system will help avoid potential threats resulting from human error, such as weariness, oversight or temporal indisposition. In the future it may be possible to create a computer which can interact with us the way we interact with each other, using Blue Eyes technology.
Characteristics & personality traits of people with blue eyes
They have higher pain tolerance.
Good strategic thinkers.
They have slow reflexes.
They might do better academically.
They are more sensitive to light.
They have lots of energy.
They might be less agreeable.
They might be more competitive.
They might be cautious.
They might be shy.
They are physically strong.
"Basic Concept of Network Security & Attacks"
Chiranjit Das
CSE – 3rd year
Network security is any activity designed to protect the usability and integrity of your network and data. It
includes both hardware and software technologies. Effective network security manages access to the
network. It targets a variety of threats and stops them
from entering or spreading on your network.
Network security combines multiple layers of defenses at the edge and in the network. Each network security layer implements policies and controls. Authorized users gain access to network resources, but malicious actors are blocked from carrying out exploits and threats. Digitization has transformed our world. How we live, work, play, and learn have all changed. Every organization that wants to deliver the services that customers and employees demand must protect its network. Network security also helps you protect proprietary information from attack. Ultimately it protects your reputation.
Let us go through the basics of Networks, its security and various attacks!!
What is Network?
In simple language, a Computer Network is a collection of computers/devices, also known as nodes, which are connected to each other in a certain pattern.
What is network Security?
Network Security is the practice of providing security for data over a network.
Basic technology used in network security:
1. PLAIN TEXT (a readable format)
2. CIPHER TEXT (a non-readable format)
ENCRYPTION
The process of converting plain text to cipher text is called Encryption. The study of encryption is called CRYPTOGRAPHY.
Encryption can be done in two ways:
i. Stream Cipher
It works bit by bit; that is, the conversion takes place one bit at a time. This is suitable only for short messages.
ii. Block Cipher
Here a block is simply a group of bits, so each block has a fixed size. The conversion is done block by block.
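The stream-cipher idea can be sketched with a toy XOR keystream (an illustrative example only, not a real cipher such as RC4, and it works byte by byte rather than bit by bit for readability):

```python
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR each byte of data with a repeating keystream.

    XOR is its own inverse, so the same function both encrypts and decrypts.
    """
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

cipher = xor_stream(b"HELLO", b"key")   # plain text -> cipher text
plain = xor_stream(cipher, b"key")      # cipher text -> plain text
print(plain)  # b'HELLO'
```

Note how the message is consumed one unit at a time as the keystream advances, which is exactly the property that distinguishes stream ciphers from block ciphers.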
Encryption can be done using two mechanisms:
i. Symmetric Encryption
Here we consider one key, and the same key must be used for both encryption and decryption. This key is called the secret key, denoted by (Ks).
ii. Asymmetric Encryption
Here two different and independent keys are used. These two independent keys form a pair, so it is called a key pair:
1. Public key (KU)
2. Private key (KR)
Every user has a pair of keys. If one key is used in the encryption process, then the other key is used for decryption.
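The key-pair idea can be sketched with textbook RSA using tiny primes. This is insecure and purely illustrative; it only shows that one key of the pair encrypts and the other decrypts:

```python
# Toy RSA with tiny primes -- insecure, for illustrating the key pair only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: (e, n) plays the role of KU
d = pow(e, -1, phi)        # private exponent: (d, n) plays the role of KR

def encrypt(m: int) -> int:
    return pow(m, e, n)    # encrypt with the public key

def decrypt(c: int) -> int:
    return pow(c, d, n)    # decrypt with the private key

c = encrypt(65)
print(c, decrypt(c))       # decrypt(encrypt(65)) recovers 65
```

The same pair works in the other direction too: encrypting with the private key and decrypting with the public key is the basis of digital signatures.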
DECRYPTION
The process of converting Cipher Text back to Plain Text is called Decryption. The study of decryption is called CRYPTANALYSIS.
CRYPTOLOGY
The study of both ENCRYPTION and DECRYPTION is called CRYPTOLOGY. That is, CRYPTOGRAPHY and CRYPTANALYSIS together are called CRYPTOLOGY.
KEY
A key is a group of bits which plays a major role in the process of ENCRYPTION & DECRYPTION.
Why is Network Security needed?
Network security has become one of the most important factors for companies to consider. Big enterprises like Microsoft are designing and building software products that need to be protected against foreign attacks. By increasing network security, we decrease the chance of privacy spoofing, identity theft, information theft and so on.
ATTACKS
In simple words, an attack in network security means gaining access to data by an unauthorized user.
Here gaining means:
1. Accessing data
2. Modifying data
3. Destroying data
Attacks are of two types:
i. Passive Attack
ii. Active Attack
Passive Attack
Here no modification is made to the data; the unauthorized user just accesses it.
It is of two types:
1. Eavesdropping - Here only the content is released.
2. Traffic Analysis - Here the sender sends a message to the receiver while a third party observes the traffic flow. Based on this observation, the third party infers information about the data.
Active Attacks
Here the data is actually modified.
Active attacks are of four types:
1. Masquerade Attacks
Here the receiver receives data from a third party under the name of the sender.
2. Replay Attacks
Here the receiver receives a message from the sender and then receives the same message again from a third party, so the receiver receives the same message twice.
3. Data Modification
Here the sender sends a message to the receiver, but it is intercepted by a third party. The third party modifies the data, which is then received by the receiver.
4. Denial of Service
Here a third party disrupts the service provided by the server, so the intended recipient does not receive the data sent by the sender.
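A common defense against the replay attack described above is for the receiver to track nonces (one-time numbers attached to each message) and reject duplicates. A minimal sketch, where the message format and nonce scheme are invented for illustration:

```python
# Hypothetical receiver-side replay detection: each legitimate message
# carries a unique nonce, so a repeated nonce marks a replayed copy.
seen_nonces = set()

def accept(message: str, nonce: str) -> bool:
    """Accept a message only if its nonce has never been seen before."""
    if nonce in seen_nonces:
        return False              # replay detected: reject the duplicate
    seen_nonces.add(nonce)
    return True

print(accept("transfer $100", "n-001"))  # True: first delivery accepted
print(accept("transfer $100", "n-001"))  # False: replayed copy rejected
```

Real protocols combine the nonce with a message authentication code so an attacker cannot simply change the nonce on a replayed message.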
I have given you a starting word (a clue). Your job is to fill in the blanks with a 4-letter word
that matches the clue already given. This 4-letter word must complete the 7-letter word next to
it. Have fun!
1. Therefore = ? = V_ _ TI _ _
2. Whirl = ? = A _ _ IR _ _
3. Demeanour = ? = A _ B _ _ _ T
4. Shoestring = ? = G _ _ _ I _ R
"DIGITAL JEWELRY"
Sumana Paul
CSE - 3rd year
Mobile computing is beginning to break the chains that tie us to our desks, but many of today‘s mobile
devices can still be a bit awkward to carry around. In the next age of computing, there will be an
explosion of computer parts across our bodies, rather than across our desktops. Jewelry is worn for many
reasons – for aesthetics, to impress others, or as a symbol of affiliation or commitment. Basically, jewelry
adorns the body and has very little practical purpose. The combination of microcomputer devices and
increasing computer power has allowed several companies to begin producing fashion jewelry with
embedded intelligence i.e. Digital jewelry.
What is Digital jewelry?
Digital jewelry is fashion jewelry with embedded intelligence. It can best be defined as wireless, wearable computers that allow you to communicate by way of e-mail, voicemail, and voice communication.
In this post, we shall go through how various computerized jewelry (like earrings, necklace, ring, bracelet,
etc.,) will work with mobile embedded intelligence.
Introduction
The latest computer craze has been wearable wireless computers. The best examples are Red Tacton technology, wearable biosensors, smart watches, etc. "Digital Jewelry" looks to be the next sizzling fashion trend of the technological wave. In the next wave of mobile computing devices, our jewelry might double as our cell phones, personal digital assistants (PDAs) and GPS receivers.
The combination of shrinking computer
devices and increasing computer power has
allowed several companies to begin producing
fashion jewelry with embedded intelligence.
Today, manufacturers can place millions of
transistors on a microchip, which can be used
to make small devices that store tons of digital
data. Digital Jewelry appears to be one of the
biggest growing promotions of its time.
Imagine being able to email your boss just by
talking into your necklace. The whole concept
behind this is to be able to communicate to
others by means of wireless appliances. The
other key factor of this concept market is to
stay fashionable at the same time.
Digital jewelry can help you solve problems like forgotten passwords and security badges. These devices have a tiny processor and unique identifiers that interact with local sensors. "Digital jewelry" is a nascent catchphrase for wearable ID devices that contain personal information like passwords, identification, and account information. They have the potential to be all-in-one replacements for your driver's license, key chain, business cards, credit cards, health insurance card, corporate security badge, and loose cash. They can also solve a common dilemma of today's wired world: the forgotten password.
How does Digital Jewelry work?
Soon, cell phones will take a totally new form, appearing to have no form at all. Instead of one single
device, cell phones will be broken up into their basic components and packaged as various pieces of
digital jewelry or other wearable devices. Each piece of jewelry will contain a fraction of the components
found in a conventional mobile phone. Together, the digital-jewelry cell phone should work just like a
conventional cell phone.
The various components that are inside a cell phone are Microphone, Receiver, Touchpad, Display,
Circuit Board, Antenna, Battery.
IBM has developed a prototype of a cell phone that consists of several pieces of digital jewelry that will
work together wirelessly, possibly with Bluetooth wireless technology, to perform the functions of the
above components.
Here are the pieces of computerized-jewelry phone and their functions:
Earrings – Speakers embedded into these earrings
will be the phone‘s receiver.
Necklace – Users will talk into the necklace‘s
embedded microphone.
Ring – Perhaps the most interesting piece of the phone, this "magic decoder ring" is equipped with light-emitting diodes (LEDs) that flash to indicate an incoming call. It can also be programmed to flash different colors to identify a particular caller or indicate the importance of a call.
Bracelet – Equipped with a video graphics array
(VGA) display, this wrist display could also be used as a caller identifier that flashes the name and
phone number of the caller.
With a jewelry phone, the keypad and dialing function could be integrated into the bracelet, or else
dumped altogether – it‘s likely that voice-recognition software will be used to make calls, a capability that
is already commonplace in many of today‘s cell phones. Simply say the name of the person you want to
call and the phone will dial that person. IBM is also working on a miniature rechargeable battery to power
these components.
In addition to changing the way we make phone calls, digital jewelry will also affect how we deal with
the ever-increasing bombardment of e-mails. Imagine that
the same ring that flashes for phone calls could also
inform you that e-mail is piling up in your inbox. This
flashing alert could also indicate the urgency of the e-
mail. Two of the most identifiable components of a
personal computer are the mouse and monitor. These
devices are as familiar to us today as a television set.
The mouse-ring that IBM is developing will use the
company‘s Track Point technology to wirelessly move the
cursor on a computer monitor display. You‘re probably most familiar with Track Point as the little button
embedded in the keyboard of some laptops. IBM Researchers have transferred Track Point technology to
a ring, which looks something like a black pearl ring. On top of the ring is a little black ball that users will
swivel to move the cursor, in the same way, that the Track Point button on a laptop is used.
This Track Point ring will be very valuable when monitors shrink to the size of the watch face. In the
coming age of ubiquitous computing, displays will no longer be tied to desktops or wall screens. Instead,
you‘ll wear the display like a pair of sunglasses or a bracelet. Researchers are overcoming several
obstacles facing these new wearable displays, the most important of which is the readability of
information displayed on these tiny devices.
Charmed Technology is already marketing its digital jewelry, including a futuristic-looking eyepiece
display. The eyepiece is the display component of the company‘s Charmed Communicator, a wearable,
wireless, broadband-Internet device that can be controlled by voice, pen or handheld keypad. The
Communicator can be used as an MP3 player, video player and cell phone. The Communicator runs on
the company‘s Linux-based Nanix operating system.
Similar Designs available:
1. Garnet-Ring:
The picture shown above is a ring containing a microprocessor. It vibrates to let
you know that you have received a message from someone.
2.The Java Ring:
It seems that everything we access today is under lock and key. Even
the devices we use are protected by passwords. It can be frustrating
trying to keep with all of the passwords and keys needed to access any
door or computer program. Dallas Semiconductor is developing a new
Java-based, computerized ring that will automatically unlock doors and
log on to computers.
The Java Ring, first introduced at Java One Conference, has been tested
at Celebration School, an innovative K-12 school just outside Orlando,
FL. The rings given to students are programmed with Java applets that communicate with host
applications on networked systems. Applets are small applications that are designed to be run within
another application. The Java Ring is snapped into a reader, called a Blue Dot receptor, to allow
communication between a host system and the Java Ring.
The Java Ring is a stainless-steel ring, 16-millimeters (0.6 inches) in diameter, which houses a 1-million-
transistor processor, called an iButton. The ring has 134 KB of RAM, 32 KB of ROM, a real-time clock
and a Java virtual machine, which is a piece of software that recognizes the Java language and translates it
for the user‘s computer system.
Conclusion
The use of wearable devices has been growing enormously in today's world. When you compare the size of electronic devices today with what it was ten years ago, you can appreciate the kind of advancements that have happened in the world of technology. It may happen that by the end of the decade, we
could be wearing our computers instead of sitting in front of them. Digital jewelry, designed to
supplement the personal computer, will be the evolution in digital technology that makes computer
elements entirely compatible with the human form.
Jewellery trivia!!
Jewels and other decorative items are as old as the human race itself.
Diamonds were first discovered in India, over 2,400 years ago. The biggest modern supplier of diamonds is South Africa.
The tradition of giving a fiancée an engagement ring was introduced by Maximilian of Austria in 1477. He gave his soon-to-be wife, Mary of Burgundy, a masterfully crafted ring as a promise of marriage.
Egypt and Mesopotamia were the first two ancient civilizations to start organized production of jewelry. Their accomplishments in the advancement of metallurgy and gem collecting played an important role in the development of jewelry in every civilization that came after them.
The largest diamond ever found is "The Cullinan". It weighs a staggering 1.3 pounds.
Throughout history, jewelry went through many changes brought by the rise and fall of many civilizations and by changes in fashion.
Some of the most notable fashion styles that affected jewelry production are Victorian,
Romanticism, Art Deco, Art Nouveau, Renaissance, and many more.
The most important quality of emerald, sapphire and ruby is their color clarity.
Only one in a million of mined diamonds ends up in jewelry.
The famous jewelry material Black Jet, popularized during the reign of Queen Victoria, is made from fossilized coal formed over 180 million years ago.
Most pearls made today are cultured, or man-made. This is done by inserting a small shell into an oyster, which then painstakingly covers it with pearl material over a minimum of three years.
In ancient times the term Sapphire described all blue stones. Similarly, all yellow stones were called Topaz.
The United States is the world's biggest consumer of diamonds.
Diamonds are all over 3 billion years old; they formed from carbon that was heated and compressed into diamond form at a depth of 100 miles below the surface of the earth.
Gold is one of the most popular jewelry raw materials because of its shine, longevity and
softness.
Silver has been used as a jewelry material for over 6 thousand years.
"DNA Storage"
Anish Majumdar
CSE – 3rd year
Introduction -
DNA also known as Deoxyribonucleic Acid is
a molecule that carries the genetic instructions used in
the growth, development, functioning
and reproduction of all known living organisms and
many viruses. DNA and ribonucleic acid (RNA)
are nucleic acids; alongside proteins, lipids and complex
carbohydrates (polysaccharides), they are one of the four
major types of macromolecules that are essential for all
known forms of life.
Now you will be surprised to know that scientists are using DNA to make storage devices that store digital data. Though the technology is at an experimental stage, it is attracting many young technologists, scientists and investors to invest in its development.
History –
The idea and the general considerations about the possibility of recording, storage and retrieval of
information on DNA molecules were originally made by Mikhail Neiman and published in 1964–65 in
the Radiotekhnika journal, USSR, and the technology may therefore be referred to as MNeimONics,
while the storage device may be known as MNeimON (Mikhail Neiman OligoNucleotides).
Among early examples of DNA data storage, in 2007 a device was created at the University of Arizona,
using addressing molecules to encode mismatch sites within a DNA strand. These mismatches were then
able to be read out by performing a restriction digest, thereby recovering the data. This system has a
number of advantages over other methods. Firstly, unlike other methods in which bespoke molecules are
synthesized for each new DNA encoding, a common set of molecules could be used to encode any
arbitrary data. DNA synthesis is currently expensive, and laborious, so this means that this investment can
be used to encode many different sets of data, using the same set of DNA molecules. The encoded DNA
created here is also "bio-compatible", meaning that, in principle it can be readily inserted into, and
propagated within, an organism.
On August 16, 2012, the journal Science published research by George Church and colleagues at Harvard
University, in which DNA was encoded with digital information that included an HTML draft of a 53,400
word book written by the lead researcher, eleven JPG images and one JavaScript program. Multiple
copies for redundancy were added and 5.5 petabits can be stored in each cubic millimeter of DNA. The
researchers used a simple code where bits were mapped one-to-one with bases, which had the
shortcoming that it led to long runs of the same base, the sequencing of which is error-prone.
This research result showed that besides its other functions, DNA can also be another type of storage
medium such as hard drives and magnetic tapes.
Basic working principle –
First, we have to convert the digital code of 1s and 0s into a genetic code of A's, C's, T's, and G's, and then take this lowly text file and manually construct the molecule it represents. Each of these is a feat in and of itself. DNA storage requires cutting-edge techniques in data compression and security to design a sequence both info-dense enough to realize DNA's potential and redundant enough to allow robust error-checking to improve the accuracy of information retrieved down the line.
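A minimal sketch of the first step is mapping bits to bases. The 2-bits-per-base mapping below is a common illustrative choice, not the encoding used by any particular research group (which, as noted later, must also avoid long runs of the same base):

```python
# Illustrative mapping: two bits per base, 00->A, 01->C, 10->G, 11->T.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA base string, two bits at a time."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert encode(): turn a base string back into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"Hi")
print(dna, decode(dna))  # CAGACGGC b'Hi'
```

Note that under this naive mapping, a run of zero bytes produces a long run of A's, which is exactly the error-prone pattern the Harvard researchers' later codes were designed to avoid.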
DNA could take the volume of data contained in about a hundred industrial data centers and store it in a space roughly the size of a shoebox.
DNA achieves this in two ways.
One, the coding units are very
small, less than half a nanometer to
a side, where the transistors of a
modern, advanced computer
storage drive struggle to beat the 10
nanometer mark. But the increase
in storage capacity isn‘t just ten- or a hundred-fold, but thousands-fold. That differential arises from the
second big advantage of DNA: it has no problem packing three-dimensionally.
See, transistors are generally aligned on a flat plane, meaning their ability to fully use a given space is
pretty low. We can of course stack many such flat boards one atop another, but at that point a new and
totally debilitating problem arises: heat. One of the most challenging parts of designing new transistor-
based technologies, whether they‘re processors or storage devices, is heat. The more tightly you pack
silicon transistors, the more heat you‘ll create, and the harder it will be to ferry that heat away from the
device. This both limits the maximum density, and requires that we supplement the cost of the drives
themselves with expensive cooling systems.
With its super-efficient packing structure, the DNA double helix offers a great solution. Chromatin, the
DNA-protein system that makes up chromosomes, is essentially a very complex mechanism designed to
allow an inherently sticky molecule like DNA to roll up really tight, yet still unroll quickly and easily
later on, when certain patches of DNA are needed by the body.
This at-hand nature of the chromatin system, which allows any gene to be "called" from any part of the genome with roughly equal efficiency, has led the researchers to dub their storage system a DNA version of a computer's random access memory, or RAM. Like RAM, the physical location of a piece of data within the drive isn't important to the computer's ability to access that information. That's because the incredible abilities of evolution's data storage solution were tailored to evolution's unique needs, and those needs don't necessarily include performing thousands of "reads" per second. Regular, cellular DNA data storage has to untangle the complex chromatin structure of stable DNA, then unwind the DNA double helix itself, make a copy of the sequence of interest, then zip everything right back up the way it was; it takes a while.
Latest news and researches on DNA storage –
Microsoft Has a Plan to Add DNA Data Storage to Its Cloud
Based on early research involving the storage of movies and documents in DNA, Microsoft is developing an apparatus that uses biology to replace tape drives, researchers at the company say.
Computer architects at Microsoft Research say the company has formalized a goal of having an
operational storage system based on DNA working inside a data center toward the end of this decade. The
aim is a "proto-commercial system in three years storing some amount of data on DNA in one of our data centers, for at least a boutique application," says Doug Carmean, a partner architect at Microsoft Research. He describes the eventual device as the size of a large, 1970s-era Xerox copier.
Internally, Microsoft harbors the even more ambitious goal of replacing tape drives, a common format used for archiving information. "We hope to get it branded as 'Your Storage with DNA,'" says Carmean.
The plans signal how seriously some tech companies are taking the seemingly strange idea of saving
videos, photos, or valuable documents in the same molecule our genes are made of.
Two items of music anthology now stored for eternity in DNA
Thanks to an innovative technology for encoding data in DNA strands, two items of world heritage –
songs recorded at the Montreux Jazz Festival and digitized by EPFL – have been safeguarded for eternity.
This marks the first time that cultural artifacts granted UNESCO heritage status have been saved in such a
manner, ensuring they are preserved for thousands of years. The method was developed by US company
Twist Bioscience and is being unveiled today in a demonstrator created at the EPFL+ECAL Lab.
"Tutu" by Miles Davis and "Smoke on the Water" by Deep Purple have already made their mark on music
history. Now they have entered the annals of science, for eternity. Recordings of these two legendary
songs were digitized by the Ecole Polytechnique Fédérale de Lausanne (EPFL) as part of the Montreux
Jazz Digital Project, and they are the first to be stored in the form of a DNA sequence that can be
subsequently decoded and listened to without any reduction in quality.
This feat was achieved by US company Twist Bioscience working in association with Microsoft Research
and the University of Washington. The pioneering technology is actually based on a mechanism that has
been at work on Earth for billions of years: storing information in the form of DNA strands. This
fundamental process is what has allowed all living species, plants and animals alike, to live on from
generation to generation.
"FORTRAN"
Govind Kumar Prajapati
CSE - 3rd year
FORTRAN, in full Formula Translation, is a computer-programming language created in 1957 by John Backus that shortened the process of programming and made computer programming more accessible.
The creation of FORTRAN, which debuted in 1957, marked a significant stage in the development of
computer-programming languages. Previous programming was written in machine (first-generation)
language or assembly (second-generation) language, which required the programmer to write instructions
in binary or hexadecimal arithmetic. Frustration with the arduous nature of such programming led Backus
to search for a simpler, more accessible way to communicate with computers. During the three-year
development stage, Backus led an eclectic team of 10 International Business Machines (IBM) employees
to create a language that combined a form of English shorthand with algebraic equations.
FORTRAN enabled the rapid writing of computer programs that ran nearly as efficiently as programs that
had been laboriously hand coded in machine language. As computers were rare and extremely expensive,
inefficient programs were a greater financial problem than the lengthy and painstaking development of
machine-language programs. With the creation of an efficient higher-level (or natural) language, also
known as a third-generation language, computer programming moved beyond a small coterie to include
engineers and scientists, who were instrumental in expanding the use of computers.
By allowing the creation of natural-language programs that ran as efficiently as hand-coded ones,
FORTRAN became the programming language of choice in the late 1950s. It was updated a number of
times in the 1950s and 1960s in order to remain competitive with more contemporary programming
languages. FORTRAN 77 was released in 1978, followed by FORTRAN 90 in 1991 and further updates
in 1996 and 2004. However, fourth- and fifth-generation languages largely supplanted FORTRAN
outside academic circles beginning in the 1970s.
FORTRAN vs C++
As so often, the choice depends on the problem you are trying to solve, the skills you have, and the
people you work with (unless it's a solo project). I'll leave the third condition aside for the moment
because it depends on everyone's individual situation.
Problem dependence: Fortran excels at array processing. If your problem can be described in terms of
simple data structures and in particular arrays, Fortran is well adapted. Fortran programmers end up using
arrays even in non-obvious cases (e.g. for representing graphs). C++ is better suited for complex and
highly dynamic data structures.
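The array-oriented style this paragraph describes can be illustrated with a small sketch. Python stands in for Fortran here; the whole-array update below (the classic axpy kernel, y := a*x + y) is the kind of operation Fortran expresses naturally:

```python
# Illustrative sketch: the whole-array style of computation that Fortran
# encourages, written in plain Python for readability.

def axpy(a, x, y):
    """Return the element-wise update a*x + y over two equal-length arrays."""
    return [a * xi + yi for xi, yi in zip(x, y)]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(axpy(2.0, x, y))  # [12.0, 24.0, 36.0]
```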
Skill dependence: It takes a lot more programming experience to write good C++ programs than to write
good Fortran programs. If you start out with little programming experience and only have so much time
to learn that aspect of your job, you probably get a better return on investment learning Fortran than
learning C++. Assuming, of course, that your problem is suited to Fortran.
"INTERNET OF THINGS (IoT)"
Sudipta Das
CSE – 3rd year
'Smart' is one of the most well-known terms in the world today. As the days pass and technology
develops enormously, everyday non-living things are becoming smart alongside human beings. One of
the most vital players behind this smart life is the 'Internet of Things' (IoT). IoT is a network of
physical devices, vehicles and other items embedded with electronic sensors, actuators and software.
This network connectivity enables objects to collect and exchange data. Nowadays thermostats, lights,
refrigerators, cars, etc. can all be connected to the IoT. Some applications of IoT are as follows:
On Our Body:
Regular body check-ups.
Tracking our activity levels.
Counting the calories burnt in a day, etc.
Home:
Making sure the gas oven is off.
Tracking down lost keys.
Optimizing power consumption.
Keeping plants alive by monitoring them and sending reminders about their condition, etc.
Industry:
Optimizing operations and boosting productivity.
Saving resources and costs.
Proper maintenance and repair.
Taking safety measures.
Environment:
Automated street cleaning.
Tracking water data.
Efficient use of electricity.
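The common thread in all of these applications is a device that samples a sensor and exchanges the reading over the network. Here is a minimal, hypothetical Python sketch of that pattern; the device name and message fields are invented purely for illustration:

```python
import json

def make_reading(device_id, temperature_c):
    """Package one sensor sample as a JSON message an IoT device could publish."""
    return json.dumps({"device": device_id, "temperature_c": temperature_c})

# A hypothetical smart thermostat reporting one reading:
msg = make_reading("thermostat-01", 21.5)
print(msg)
```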
Amongst all the applications, one of the most fascinating is the driverless car. A driverless car!
It sounds really amazing. As the name suggests, it is an automated vehicle able to navigate to a
predetermined destination without any human intervention; it is sometimes also known as a
'self-driving car', an 'automated car' or an autonomous vehicle. Did you know that Leonardo da Vinci
designed what is often called the first prototype of such a vehicle, a self-propelled cart, around 1478?
Unbelievable, right! In June 2011, Nevada, US became the first jurisdiction in the world to allow
driverless cars on public roadways, though they are still not legal on most roads.
Driverless Cars
A burning question now comes to mind: how does a driverless car work? Let's find out. It ferries
people from one place to another without any user interaction. The car is summoned by a smartphone
for pickup at the user's location, with the destination already set. It is powered by an electric motor with
around a 100-mile range and uses a combination of sensors and software, together with highly accurate
digital maps, to locate itself in the real world. The software can recognize objects, people, cars,
road markings, signs and traffic lights, obeying the rules of the road. It can detect road works and safely
navigate around them as well. A GPS receiver is used to get a rough location of the car. There is no
steering wheel or manual control, simply a Start button and a big red emergency button. In front of the
passengers, a small screen shows the weather, the current speed and a small countdown
animation before launch. Once the journey is done, the screen displays a message reminding
passengers to take their personal belongings.
Companies developing and/or testing driverless cars include Audi, BMW, Ford, General Motors,
Volkswagen and Volvo. Google's tests involved a fleet of self-driving cars (six Toyota Priuses and an
Audi TT) navigating over 140,000 miles of California streets and highways.
So, IoT is very quickly becoming a reality. Each year, a greater number of everyday devices
suddenly become 'smart': smartphones, smart TVs, smart homes, smart kitchens and so on. IoT provides
a large platform for research as well as for business. The 'Internet of Things' is already here, but there
is still a long way to go.
"Paper battery"
Megha Biswas
CSE – 3rd year
A paper battery is a flexible, ultra-thin energy storage and production device formed by
combining carbon nanotubes with a conventional sheet of cellulose-based paper. A paper battery acts as
both a high-energy battery and super capacitor, combining two components that are separate in traditional
electronics.
The functioning of paper batteries is similar to that of a normal chemical battery.
Conventional batteries may be easily damaged by corrosion and sometimes require bulky housing.
Paper batteries, by contrast, are non-corrosive, non-toxic and lighter than normal batteries.
Paper battery = carbon nanotubes + cellulose (paper).
INVENTION
In December 2009 at Stanford University, Yi Cui and his research team successfully demonstrated the
first working prototype, which provides 1.5 V as its terminal voltage.
Paper Battery Construction
The first method involves fabricating a zinc-based anode and a manganese dioxide-based cathode.
The batteries are printed onto paper using a standard silkscreen printing press. This paper is infused
with aligned carbon nanotubes, which act as the electrodes, and is then dipped in a solution of
ionic liquid, which acts as the electrolyte.
The second method is a bit more complex and involves growing nanotubes on a silicon substrate.
The gaps in the matrix are then filled with cellulose and, once the matrix has dried, the combination
of cellulose and nanotubes is peeled off. Sheets of paper consisting of layers of carbon nanotubes
are thus created. Two such sheets are combined to form a supercapacitor, with an ionic liquid such
as human blood, sweat or urine used as the electrolyte.
The third is a simple method that can be carried out in a laboratory. It involves spreading a
specially formulated ink of carbon nanotubes over a rectangular sheet of paper coated with an
ionic solution. A thin film of lithium is then laminated onto the other side of the paper, and
aluminium rods are connected to carry current between the two electrodes.
The fourth method involves coating a stainless steel substrate with carbon nanotubes. The coated
substrate is then dried at 80 degrees Celsius for five minutes, after which the material is peeled off.
A pair of films is used for each paper battery, with each film coated with a different electrode
material, such as LTO or LCO. A sheet of paper is then sandwiched between the two films using glue.
Working Principle
The internal operation of a paper battery is similar to that of a conventional battery, with each
cell generating about 1.5 V.
Recall that traditional batteries work by moving positively charged particles, called ions, and
negatively charged particles, called electrons, between the negative electrode, called the anode,
and the positive electrode, called the cathode. Current flows as electrons travel from the anode to
the cathode through the external conductor, since the electrolyte is an insulator and does not
provide a free path for electrons.
Similarly, in some paper batteries, the carbon nanotubes act as the cathode, the metal acts as the
anode and the paper is the separator.
The chemical reaction between the metal and the electrolyte produces ions and frees electrons,
while the chemical reaction between the carbon and the electrolyte consumes them. These electrons
flow from the anode to the cathode through the external circuit.
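Since each cell supplies about 1.5 V, reaching a higher operating voltage is simple series arithmetic. A quick sketch (the cell count is chosen arbitrarily for illustration):

```python
CELL_VOLTAGE = 1.5  # approximate terminal voltage of one paper cell, as above

def stack_voltage(n_cells):
    """Total voltage of n paper cells connected in series."""
    return n_cells * CELL_VOLTAGE

print(stack_voltage(4))  # 6.0
```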
Need for Paper Battery
The ordinary electrochemical battery faces many problems:
Limited lifetime: Primary batteries cannot be recharged like secondary batteries; they irreversibly
convert chemical energy into electrical energy. Although secondary batteries are rechargeable, their
lifetime may be very short and they are much costlier than primary ones. The paper battery offers an
advantage over all of these problems.
Environmental influence: The extensive use of batteries can generate environmental pollution, such as
toxic metal pollution. Paper batteries, however, are environmentally friendly and decompose very
easily without causing harm.
Leakage: If a battery leaks, the chemicals released may be very dangerous to the environment and
to nearby metals in contact with the battery. Paper batteries, by contrast, contain no toxic
chemicals.
Uses
Paper batteries show promise for applications where size and portability are the major
requirements.
Most modern electronic devices, like digital watches and smart cards, call for ultra-thin batteries
that are non-toxic, flexible and long-lasting. A paper battery can be rolled, twisted, folded and even
cut into your desired shape and size without any drop in its efficiency.
Paper batteries can now be implemented in wearable technology such as Google Glass, wearable
biosensors and wearable computers. They are also used in entertainment devices, tags and smart
cards.
They suit medical applications such as disposable diagnostic devices, and can even be used in
pacemakers thanks to their non-toxic and biodegradable nature.
They are ideal for aircraft, automobiles, remote controls, etc.
Advantages of Paper Battery
A paper battery can be used as both a supercapacitor and a battery.
Paper batteries are very flexible, ultra-thin, non-toxic and biodegradable, with a long life, and they
provide steady power.
They are available in different shapes and sizes and offer high energy efficiency.
Paper batteries are low-cost and can be easily disposed of.
They produce about 1.5 V and are rechargeable.
Limitations of Paper Batteries
The carbon nanotubes used in paper batteries are very expensive to produce. Different fabrication
techniques are used, such as chemical vapor deposition (CVD), arc discharge, electrolysis and laser
ablation.
If carbon nanotubes are inhaled, they start interacting with the macrophages present in the lungs,
much as asbestos fibres do, so they can be very hazardous to human health.
"PILL CAMERA"
Disha Mukherjee
CSE – 3rd year
A pill camera is a piece of equipment used for a procedure
known as capsule endoscopy. It was developed in the late
20th century and was approved for use by the FDA in
2001.
The Aim of the Technology
The aim of technology is to produce products on a large scale at cheaper prices and with increased
quality. Present technologies have achieved part of this, but manufacturing technology remains at the
macro level. A device named the Diagnostic Imaging System comes with the PillCam and is mainly
used in the diagnosis of cancer, ulcers and anaemia. It has brought about a revolution in the field of
medicine.
Description
The camera is about 1 inch long and one-half inch in diameter, with rounded edges making it shaped like
a drug capsule (although slightly larger). It comprises a camera, a flash, a plastic capsule, batteries
and a transmitter.
The latest pill camera measures 26 × 11 mm and is capable of transmitting 50,000 color images during
its traversal through the patient's digestive system. It is small enough to be swallowed.
Inside a capsule camera
Optical Dome
Lens Holder
Lens
Illuminating LEDs
CMOS Image Sensor
Battery
ASIC Transmitter
Antennae
Working Of A Pill Camera
The capsule is swallowed by the patient and is propelled forward by the natural muscular waves of
the digestive tract, passing through the stomach into the small intestine and on towards the large
intestine.
The pill camera takes two photos per second while passing through the digestive tract, building up
tens of thousands of high-quality images over the course of the examination.
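A back-of-envelope check of these figures: two photos per second sustained over the roughly eight-hour transit gives an image count in the tens of thousands, the same order of magnitude as the 50,000 images quoted earlier in this article.

```python
# Rough arithmetic only: 2 frames/second over an ~8-hour transit.
frames_per_second = 2
hours = 8
total_images = frames_per_second * 60 * 60 * hours
print(total_images)  # 57600, the same order as the ~50,000 images quoted
```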
The images are transmitted by the capsule to a data recorder worn by the patient on a belt
around the waist. The patient can go about the day as usual after swallowing the pill camera.
The stored data is then transferred to the physician's computer for further analysis. Normally, the
process takes around eight hours to complete. According to studies, the pill camera is safe to
use and does not have any side effects.
Uses
Crohn‘s Disease.
Malabsorption Disorders.
Tumors of the small intestine & Vascular Disorders.
Ulcerative Colitis
Medication-related small bowel injury.
Advantages of Pill Camera
Biggest impact on the medical industry.
Nanorobots can perform delicate surgeries.
They can also change the physical appearance.
They can slow or reverse the aging process.
Used to shrink the size of components.
Nanotechnology has the potential to have a positive effect on the Environment.
Major drawbacks of the pill camera:
The pill camera can only transmit images from inside the body to the outside. Consequently, it is
impossible to control the camera's behaviour, including its on/off power functions and effective
illumination inside the intestine.
It is risky to try this procedure on patients with gastrointestinal strictures because of the
obstruction risk. There is also a chance that the pill camera may not be able to traverse the
digestive system freely.
If there is a partial obstruction in the patient's small intestine, there is a risk of the pill
getting stuck there, and the patient may suffer an intestinal obstruction and end up in the emergency
room.
Important facts about the pill camera
The pill camera is normally about the same size as a multivitamin tablet.
More than half of the pill capsule is filled with the batteries.
Hospitals use a computer software program to speed up viewing of the video.
A tiny Perspex dome is installed over the lens to make sure that all images are taken in focus.
The normal cost for this type of procedure is around £1,000, which includes the cost of the PillCam.
The pill camera was first developed by Given Imaging Ltd., an Israeli company.
"SPACEX BFR- ANYWHERE ON EARTH IN UNDER AN HOUR"
Sagar Prasad
CSE – 3rd year
What is SpaceX BFR? BFR is a new way to move people and things at rocket speeds (around
17,000 km/h) for the price of an airline ticket. It is a low-cost, reusable, Earth-to-Earth (or
Earth-to-another-planet) high-speed rocket transportation system. It's like broadband for
transportation.
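The "anywhere on Earth in under an hour" claim follows from simple arithmetic at the quoted ~17,000 km/h speed. A rough, idealized sketch (the city-pair distances are approximate great-circle figures, and ascent and descent are ignored):

```python
SPEED_KMH = 17000  # rocket speed quoted above

def travel_minutes(distance_km):
    """Idealized point-to-point travel time at cruise speed, in minutes."""
    return distance_km / SPEED_KMH * 60

print(round(travel_minutes(5500)))   # ~19 min, roughly London to New York
print(round(travel_minutes(12000)))  # ~42 min, roughly New York to Shanghai
```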
The BFR, which stands for Big Falcon Rocket and was announced in September 2017, is the code name
for SpaceX's privately funded launch vehicle, spacecraft and space and ground infrastructure system of
spaceflight technology—including reusable launch vehicles and spacecraft—that is intended by the
company to replace all of SpaceX's existing launch vehicles and spacecraft by the early 2020s. The
system includes Earth infrastructure for rapid launch and relaunch, plus low-Earth-orbit and
zero-gravity propellant transfer technology. The new vehicle, while smaller than an earlier SpaceX
composite-material vehicle design, is much larger than the existing SpaceX operational vehicles it is
intended to replace.
The new launch vehicle is planned to replace both the Falcon 9 and Falcon Heavy launch vehicles and
the Dragon spacecraft in the operational SpaceX fleet in the early 2020s, initially aiming at the
Earth-orbit market but explicitly adding substantial capability to the spacecraft to support
long-duration spaceflight in the cislunar and Mars mission environments as well. SpaceX intends this
approach to bring significant cost savings, which will help the company justify the development
expense of designing and building the new launch vehicle. BFR is a 9-meter (30 ft) diameter launch
vehicle. The BFR also has the capability to travel to Venus.
An earlier, larger design for the first non-Falcon launch vehicle from SpaceX was known as the ITS
launch vehicle in 2016–2017. The designs for all of the ITS vehicles were 12 meters (39 ft) in
diameter. While the earlier SpaceX designs had been aimed at Mars transit and other interplanetary uses,
SpaceX pivoted in 2017 to a plan that would replace all SpaceX launch-service-provider capacity—Earth
orbit, the lunar-orbit region, and interplanetary space transport—with a single 9 m (30 ft) diameter
class of launch vehicles and spacecraft.
Development work began on the Raptor rocket engines, to be used on both stages of the BFR launch
vehicle, in 2012, and engine testing began in 2016. New rocket engine designs are typically considered
one of the longest development subprocesses for new launch vehicles and spacecraft. Tooling for the
main tanks has been ordered, and a facility to build the vehicles is under construction; construction
of the first ship will start in the second quarter of 2018. The company has publicly stated an
aspirational goal of initial Mars-bound cargo flights of BFR launching as early as 2022, followed by
the first BFR flight with passengers one synodic period later, in 2024.
BFR progress in the current scenario:
On 29 September 2017, at the 68th annual meeting of the International Astronautical Congress in
Adelaide, South Australia, SpaceX unveiled the new, smaller vehicle architecture. The new launch
vehicle system—program codename BFR—would be a 9-meter (30 ft) diameter design, using
methalox-fueled Raptor rocket engine technology directed initially at the Earth-orbit and cislunar
near-Earth environment before later being used for Mars missions. Musk said, "We are searching for
the right name, but the code name, at least, is BFR."
The aerodynamics of the BFR second stage were changed. The new version is cylindrical, with small fins
at the rear end; the cylindrical shape is for mass optimization. The fins are needed to allow the ship
to land on both Earth and Mars, with both large and minimal payloads. There are three configurations:
BFR crew, BFR cargo and BFR tanker. The first two are primarily destined to fly to Mars; the cargo
version can also be used to launch satellites to low Earth orbit. Initially, the cargo and tanker
versions were the same.
After refueling in high Earth orbit, the spacecraft will be able to land on the Moon and return to Earth
without further refueling. The most surprising announcement was the plan to use BFR as a point-to-point
transport system for people on Earth. Musk expects the ticket price to be on par with an economy plane
ticket for the same distance.
As of September 2017, Raptor engines had been tested for a combined total of 1,200 seconds of firing
time over 42 main engine tests. The longest test was 100 seconds, a limit set by the size of the
propellant tanks at the SpaceX ground test facility. The test engine operates at around 200 bar of
pressure; the flight engine is aimed at 250 bar, and SpaceX expects to achieve 300 bar in later
iterations.
In addition, Musk championed a larger systemic vision: a bottom-up, emergent order of other interested
parties—whether companies, individuals or governments—utilizing the new and radically lower-cost
transport infrastructure that SpaceX would endeavor to build, in order to help create a sustainable
human civilization on Mars by innovating and meeting the demand that such a growing venture would
occasion.
In the November 2016 plan, SpaceX indicated it would fly its earliest research spacecraft missions to
Mars using its Falcon Heavy launch vehicle and a specially modified Dragon spacecraft, called "Red
Dragon", prior to the completion and first launch of any ITS vehicle. Later Mars missions using ITS
were then slated to begin no earlier than 2022.
By February 2017, the earliest launch of any SpaceX mission to Mars had slipped to 2020, two years
later than the previously mentioned 2018 Falcon Heavy/Dragon 2 exploratory mission. In July 2017,
SpaceX announced it would no longer plan to use a propulsively landed "Red Dragon" spacecraft on the
early missions, as had been previously announced.
In July 2017, SpaceX made public its plans to build a much smaller launch vehicle and spacecraft
before building the ITS launch vehicle that had been unveiled nine months earlier for just the
beyond-Earth-orbit part of future SpaceX launch service offerings. Musk indicated that the architecture
had "evolved quite a bit" since the November 2016 articulation of the comprehensive Mars architecture.
A key driver of the new architecture is to make the new system useful for substantial Earth-orbit and
cislunar launches, so that the new system might pay for itself, in part, through economic spaceflight
activities in the near-Earth space zone.
Developing the rocket: Musk made one thing very clear: SpaceX's future is the BFR. The company is no
longer going to put resources into improving its current line of Falcon 9 vehicles or its bigger,
next-generation Falcon Heavy. Instead, all of the company's research and development resources will go
into creating the new monster rocket.
The revenue SpaceX currently receives from launching satellites and servicing the International Space
Station will also go toward funding the development of the rocket, Musk said. Right now, business does
seem to be good: SpaceX has a full manifest of customers, and the company significantly increased its
launch frequency to 13 so far this year (up from eight last year). NASA is also paying SpaceX to send
cargo, and soon astronauts, to the ISS.
It's possible that SpaceX's satellite business and NASA contracts are enough to fund the BFR's
development. But it is likelier that the company will need additional funds, especially if Musk hopes
to meet his "aspirational" deadline of sending the vehicle to the Red Planet by 2022. Private investment
seems like an option. And another good source of money? The government!
Once it's built, then what?
Assuming SpaceX does get the money it needs to develop the BFR, then what? Will there be enough
customers to help offset the development costs and make SpaceX profitable?
Musk advertised a number of uses for the BFR, beyond just going to the Moon and Mars. He argued that
the new system would essentially replace the Falcon 9 rocket and Dragon spacecraft, and that SpaceX
could use the new vehicle to launch satellites, service the space station, and even clean up space debris in
orbit. And the more uses a rocket has, the more potential customers the rocket has, too.
If the cost of the vehicle is low enough, it may eventually create its own demand. But that demand may
not materialize for a while. Plus, SpaceX needs to build the BFR first, and given Musk's lack of
specificity in terms of cold, hard cash, it's possible only SpaceX's accountants know if the money is
really there to do it.
"The apps you need to survive a natural disaster"
Alisha Neogi
CSE – 3rd year
When nature unleashes its fury on your part of the world, it is of course a fantastic idea to lay in all
the emergency supplies recommended by the authorities. But sometimes, when disaster comes, it is not
clean water or boarded-up windows that will save you.
When the next big earthquake strikes the Bay Area, millions will likely be stranded without the
high-tech comforts provided by Silicon Valley. As evidenced in Texas, Florida and Puerto Rico following
three different hurricanes, disasters can wipe away Wi-Fi and cellular data infrastructure, making
modern technology obsolete.
Twenty-eight years ago this month, the 6.9 magnitude Loma Prieta Earthquake did major damage to the
Bay Area - albeit at a time when the internet was not prevalent. And this month has seen ravaging
wildfires in the North Bay wipe out smartphone and internet service.
But some forward thinking and a few smartphone apps can be a valuable companion to navigate a disaster
and its aftermath - even when there is little to no data connection, experts say.
"We try to tell people it's important to have a plan," said Jennifer Strauss, the University of California at
Berkeley's Seismology Lab's external relations officer. "With all the tech we are exposed to, we get
caught up in the idea that everything is readily available. Tech goes hand in hand with preparedness."
In 2016, Strauss and her team at Berkeley launched the MyShake app, which allows smartphones to
detect earthquakes using built-in sensors and send warning alerts to users near the shaking. Akin to
step-counting fitness apps, MyShake runs silently in the background, looking for seismic tremors and
collecting data.
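The idea of spotting quakes in a stream of accelerometer readings can be caricatured in a few lines. This is a toy sketch, not MyShake's actual algorithm: flag shaking when several consecutive readings exceed a threshold (all values and thresholds here are invented).

```python
def detect_shaking(samples, threshold=2.0, min_consecutive=3):
    """Return True if min_consecutive accelerometer magnitudes in a row exceed threshold."""
    run = 0
    for magnitude in samples:
        run = run + 1 if magnitude > threshold else 0
        if run >= min_consecutive:
            return True
    return False

quiet = [0.1, 0.3, 0.2, 0.4]          # normal handling noise
shaking = [0.2, 2.5, 3.1, 2.8, 1.0]   # a burst of sustained strong motion
print(detect_shaking(quiet), detect_shaking(shaking))  # False True
```

Real detectors must also distinguish a dropped phone from a quake, which MyShake does by comparing many phones in the same area.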
While there are 40,000 active MyShake users on Android worldwide, the ultimate goal is for MyShake
to become a portable earthquake siren for regions that sit near faults but lack a public earthquake
warning system. MyShake is not yet available on iPhones, Strauss said. And while MyShake may prove
useful in future earthquakes, its effectiveness may be limited in the Bay Area: since the region sits
atop earthquake faults, residents may have little to no time to respond to an alert should the
epicenter be in the Bay Area itself, according to USGS geophysicist Brian Kilgore. One such fault is
the Hayward Fault, which runs through the entire East Bay.
The Hayward Fault has a 72 percent chance of producing a magnitude 6.7 or larger earthquake before
2043 and has not experienced a large earthquake since 1868.
"It's not of if, it's of when," said Kilgore. "A large earthquake in the Hayward Fault is going to be a large
disaster. Much of the damage is simply unavoidable."
The Bay Area's telecommunications infrastructure is considered more resilient than that of other
regions recently hit by natural disasters, such as Puerto Rico, which lost communications for days.
Verizon, for example, has spent over $11 billion on wireless infrastructure, in part to shore up cell
towers and server centers against earthquakes in California, according to its spokesperson Heidi Flato.
"Because earthquakes cause lateral and vertical movements, Verizon's mobile switching centers in
California are designed to withstand likely seismic movement patterns," Flato said in a statement.
Even if there are signals, another issue is that service could be quickly overwhelmed by panicked
survivors making phone calls and sending texts right after an earthquake, Kilgore said.
But there are alternative means of communication, which have proven popular in dire circumstances.
One such app is Zello, which turns smartphones into walkie-talkies. The app was wildly popular in
Texas and Florida during their respective hurricanes, topping the Apple App Store charts with over
6 million new users and becoming a necessity for hurricane rescue volunteers.
Zello requires at least a marginal 2G connection, the predecessor to the newer and faster 3G and 4G
networks, according to its CEO, Bill Moore.
"(Zello) is popular when the stakes are high because it's based around live voice," said Moore. "Your
voice communicates a lot more than text can. You don't have to read it, you can listen while driving. It's
authentic, so in a few seconds of voice, you can guess the emotional state of the voice."
Another app is FireChat, which allows users to text each other even without a Wi-Fi or cellular
connection. The app uses mesh networking, which means the texts bounce from one smartphone to another
nearby smartphone running the FireChat app until they reach their destination.
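The hop-by-hop relaying described here can be sketched as a shortest-path search over whichever phones happen to be in radio range of each other. This is a simplified illustration, not FireChat's actual routing protocol, and the topology below is invented:

```python
from collections import deque

def route_hops(links, src, dst):
    """Breadth-first search for the fewest phone-to-phone hops from src to dst, or None."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        phone, hops = queue.popleft()
        if phone == dst:
            return hops
        for neighbor in links.get(phone, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

# Four phones in a line: A can only reach D by relaying through B and C.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(route_hops(links, "A", "D"))  # 3
```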
FireChat was immensely popular during the Hong Kong political protests in 2014 when smartphones
could not get a signal because of overcrowding in a tight area and Chinese authorities blocked apps like
Twitter and Instagram. Open Garden, the San Francisco-based startup that created FireChat, did not
respond to a request for comment.
If person-to-person communication fails, FM radio may be the best bet to stay informed after a disaster.
Plenty of FM radio apps are available for both iOS and Android. One such app, called NextRadio, allows
Android phones to be turned into an FM radio without using any data by activating the FM chip
inside the phone's processor. All it requires is an earphone or stereo cable attached to the smartphone
to act as an antenna.
During Hurricane Irma, NextRadio was a valuable tool for Floridians weathering the storm. In select
markets, listener and session counts increased by more than 1000 percent.
NextRadio is available on iOS, but iPhones only allow FM radio to be streamed over cellular data.
During Irma, Apple faced pressure from government leaders, such as FCC Chairman Ajit Pai, and local
organizations in Florida to allow iPhone owners to activate the FM chip. Apple said there is no
FM chip in its iPhone 7 and 8 models.
While the digital preparation for the big one is critical, Kilgore, the USGS geophysicist, also says people
living in quake-prone regions shouldn't prioritize it over other survival preparations like water, food and
shelter.
"It depends so much on how much infrastructure survives and stays intact," said Kilgore. "There's not
much a single individual can do about that."
Just don't forget to charge your phone before the disaster strikes because none of these apps will help you
much once you run out of battery. Portable and car chargers can be a great way to keep your phone juiced
up even if the power goes out.
The number of available apps in the Google Play Store was most recently placed at 3.3
million apps in September 2017, after surpassing 1 million apps in July 2013. Google
Play was originally launched in October 2008 under the name Android Market.
"Augmented Reality"
Biswadeepam Pal
CSE – 4th year
To understand Augmented Reality (AR), we must first be familiar with the concept of Virtual Reality
(VR). Here is a brief overview of it.
What is Virtual Reality?
Virtual reality (VR) is an artificial, computer-generated
simulation or recreation of a real life environment or
situation. It immerses the user by making them feel like they
are experiencing the simulated reality firsthand, primarily by
stimulating their vision and hearing.
VR is typically achieved by wearing a headset equipped with the technology, like Facebook's Oculus,
and is used prominently in two different ways:
To create and enhance an imaginary reality for gaming, entertainment, and play (such as video
and computer games or 3D movies, viewed on a head-mounted display).
To enhance training for real-life environments by creating a simulation of reality where people
can practice beforehand (such as flight simulators for pilots).
Virtual reality scenes can be built with a coding language known as VRML (Virtual Reality Modeling
Language), which can be used to create a series of images and specify what types of interactions are
possible for them.
Moving on to the concept of Augmented Reality,
What is Augmented Reality?
Augmented reality (AR) is a technology that layers computer-
generated enhancements atop an existing reality in order to make
it more meaningful through the ability to interact with it. AR is
developed into apps and used on mobile devices to blend digital
components into the real world in such a way that they enhance
one another, but can also be told apart easily.
AR technology is quickly coming into the mainstream. It is used
to display score overlays on telecasted sports games and pop out 3D emails, photos or text messages on
mobile devices. Leaders of the tech industry are also using AR to do amazing and revolutionary things
with holograms and motion activated commands.
Hardware Related to Augmented Reality
Hardware components for augmented reality are: processor, display, sensors and input devices.
Modern mobile computing devices like smartphones and tablet computers contain these elements, which
often include a camera and MEMS sensors such as an accelerometer, GPS, and a solid-state compass, making
them suitable AR platforms.
[Figure: Person wearing a virtual reality headset]
[Figure: Augmented reality in games]
Display
Various technologies are used in Augmented Reality
rendering including optical projection
systems, monitors, hand held devices, and display
systems worn on the human body.
A head-mounted display (HMD) is a display device worn on the forehead, such as a harness or
helmet-mounted display.
HMDs place images of both the physical world and
virtual objects over the user's field of view. Modern
HMDs often employ sensors for six degrees of
freedom monitoring that allow the system to align
virtual information to the physical world and adjust accordingly with the user's head movements. HMDs
can provide AR users with mobile and collaborative experiences. Specific providers, such
as uSens and Gestigon, are even including gesture controls for full virtual immersion.
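The six-degrees-of-freedom alignment described above amounts to a coordinate transform applied every frame. The following Python sketch is purely illustrative (the function name and yaw-only simplification are this example's, not any real AR SDK's); it re-expresses a world-anchored virtual point in head-relative coordinates so the virtual object stays pinned to the physical world as the head moves:

```python
import math

def world_to_view(point, head_pos, head_yaw):
    """Transform a world-anchored 3D point into head-relative view
    coordinates (yaw-only rotation for brevity; a full 6-DoF system
    would use the complete rotation from the headset's sensors)."""
    # Translate: express the point relative to the head position.
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    dz = point[2] - head_pos[2]
    # Rotate by the inverse of the head yaw (rotation about the vertical axis).
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dz, dy, s * dx + c * dz)

# A virtual label anchored 2 m ahead of the origin: after the head turns
# 90 degrees, the same world point lands at a different view position.
print(world_to_view((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), math.pi / 2))
```

Running this transform per frame, driven by the HMD's orientation sensors, is what keeps virtual content registered to the physical scene.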
Eyeglasses
AR displays can be rendered on devices resembling eyeglasses.
Versions include eyewear that employs cameras to intercept the
real-world view and re-display its augmented view through the
eye pieces, and devices in which the AR imagery is projected
through or reflected off the surfaces of the eyewear lens pieces.
Contact lenses
Contact lenses that display AR imaging are in development. These bionic contact lenses might contain
the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for
wireless communication. The first contact lens display was reported in 1999, with further prototypes
following in 2010/2011. Another version of contact lenses, in development for the U.S. Military, is
designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the
spectacles and distant real-world objects at the same time. The futuristic short film Sight features
contact lens-like augmented reality devices.
Applications
Archaeology
AR was applied to aid archaeological research. By augmenting archaeological features onto the modern
landscape, AR allowed archaeologists to formulate possible site configurations from extant structures.
Architecture
AR can aid in visualizing building projects. Computer-generated images of a structure can be
superimposed into a real life local view of a property before the physical building is constructed there;
this was demonstrated publicly by Trimble Navigation in 2004. AR can also be employed within an
architect's workspace, rendering into their view animated 3D visualizations of their 2D drawings.
[Figure: Apple's iPhone X and the AR concept]
[Figure: Augmented reality eyewear]
[Figure: AR-embedded contact lens]
Architecture sightseeing can be enhanced with AR applications allowing users viewing a building's
exterior to virtually see through its walls, viewing its interior objects and layout.
Education
In educational settings, AR has been used to complement
a standard curriculum. Text, graphics, video, and audio
were superimposed into a student's real-time
environment. Textbooks, flashcards and other educational
reading material contained embedded "markers" or
triggers that, when scanned by an AR device, produced
supplementary information to the student rendered in a
multimedia format.
As AR evolved, students could participate interactively: computer-generated simulations of historical
events could come alive, letting students explore and learn the details of each significant area of the
event site. In higher education, several applications are available. Construct3D, a Studierstube system,
allowed students to learn mechanical engineering concepts, math or geometry. Chemistry AR apps allowed
students to visualize and interact with the spatial structure of a molecule using a marker object held
in the hand. Anatomy students could visualize different systems of the human body in three dimensions.
Military
An interesting early application of AR occurred when Rockwell International created video map overlays
of satellite and orbital debris tracks to aid in space observations at Air Force Maui Optical System. In
their 1993 paper "Debris Correlation Using the Rockwell WorldView System" the authors describe the
use of map overlays applied to video from space surveillance telescopes. The map overlays indicated the
trajectories of various objects in geographic coordinates. This allowed telescope operators to identify
satellites, and also to identify – and catalog – potentially dangerous space debris.
Starting in 2003 the US Army integrated the SmartCam3D augmented reality system into the Shadow
Unmanned Aerial System to aid sensor operators using telescopic cameras to locate people or points of
interest. The system combined both fixed geographic information including street names, points of
interest, airports, and railroads with live video from the camera system. The system offered a "picture in
picture" mode that allowed it to show a synthetic view of the area surrounding the camera's field
of view. This helps solve a problem in which the field of view is so narrow that it excludes important
context, as if "looking through a soda straw". The system displays real-time friend/foe/neutral location
markers blended with live video, providing the operator with improved situational awareness.
Retail
Augmented reality is becoming more frequently used for online advertising. Retailers offer the ability to
upload a picture on their website and "try on" various clothes, which are overlaid on the picture. Going
further, companies such as Bodymetrics install dressing booths in department stores that offer full-body
scanning. These booths render a 3-D model of the user, allowing consumers to view different outfits
on themselves without the need to physically change clothes.
[Figure: AR-embedded globe]
"Wireless ad hoc network"
Deep Narayan Biswas
CSE – 4th year
Introduction
A wireless ad hoc network (WANET) is a decentralized type of wireless network. The network is ad
hoc because it does not rely on a pre-existing infrastructure, such as routers in wired networks or access
points in managed (infrastructure) wireless networks. Instead, each node participates in routing by
forwarding data for other nodes, so the determination of which nodes forward data is made dynamically
on the basis of network connectivity. In addition to the classic routing, ad hoc networks can
use flooding for forwarding data.
Wireless mobile ad hoc networks are self-configuring, dynamic networks in which nodes are free to
move. Wireless networks lack the complexities of infrastructure setup and administration, enabling
devices to create and join networks "on the fly" – anywhere, anytime.
History
The earliest wireless data network was called the "packet radio" network, and was sponsored by the Defense
Advanced Research Projects Agency (DARPA) in the early 1970s. Bolt, Beranek and Newman
Technologies (BBN) and SRI International designed, built, and experimented with these earliest systems.
Experimenters included Robert
Kahn, Jerry Burchfiel, and Ray
Tomlinson. Similar experiments took
place in the Ham radio community.
These early packet radio systems
predated the Internet, and indeed were
part of the motivation of the original
Internet Protocol suite. Later DARPA
experiments included the Survivable
Radio Network (SURAN) project,
which took place in the 1980s.
A third wave of academic
activity started in the mid-1990s with
the advent of inexpensive 802.11
radio cards for personal computers. Current wireless ad hoc networks are still designed primarily for
military use. The early packet radios suffered from (1) bulky equipment, (2) low data rates, and (3) an
inability to maintain links under high mobility. The field did not progress much further until the early
1990s, when wireless ad hoc networks were reborn.
Early work
In the early 1990s, Charles Perkins from SUN Microsystems USA, and Chai Keong Toh from Cambridge
University separately started to work on a different Internet, that of a wireless ad hoc network. Perkins
was working on the dynamic addressing issues. Toh worked on a new routing protocol, which was known
as ABR – Associativity-Based Routing. Perkins eventually proposed AODV routing, which is based on
distance-vector routing. Toh's proposal was an on-demand routing protocol, i.e. routes are discovered
on the fly, in real time, as and when needed. Both ABR and AODV were submitted to the IETF as RFCs. ABR was
implemented successfully into Linux OS on Lucent WaveLAN 802.11a enabled laptops and a practical ad
hoc mobile network was therefore proven to be possible in 1999. AODV was subsequently proven and
implemented in 2005. DSR – Dynamic Source Routing, proposed by David Johnson and Dave Maltz in the
mid-1990s, was published as an experimental RFC in 2007.
Application
The decentralized nature of wireless ad-hoc networks makes
them suitable for a variety of applications where central nodes
can't be relied on and may improve the scalability of networks
compared to wireless managed networks, though theoretical
and practical limits to the overall capacity of such networks
have been identified. Minimal configuration and quick
deployment make ad hoc networks suitable for emergency
situations like natural disasters or military conflicts. The
presence of dynamic and adaptive routing protocols enables ad
hoc networks to be formed quickly.
Wireless ad-hoc networks can be further classified by
their application:
Mobile ad hoc networks (MANETs)
A mobile ad hoc network (MANET) is a continuously
self-configuring, infrastructure-less network of mobile
devices connected without wires.
Vehicular ad hoc networks (VANETs)
VANETs are used for communication between vehicles
and roadside equipment. Intelligent vehicular ad hoc
networks (InVANETs) apply artificial intelligence to
help vehicles behave intelligently during
vehicle-to-vehicle collisions and accidents.
Vehicles use radio waves to communicate with each other.
Smartphone ad hoc networks (SPANs)
SPANs leverage the existing hardware (primarily Bluetooth) in commercially available smartphones to
create peer-to-peer networks without relying on cellular carrier networks, wireless access points, or
traditional network infrastructure.
Internet-based mobile ad hoc networks (iMANETs)
iMANETs are ad hoc networks that link mobile nodes and fixed Internet-gateway nodes.
Military and tactical MANETs
Military MANETs are used by military units with emphasis on security, range, and integration with
existing systems.
Routing
Proactive routing
This type of protocol maintains fresh lists of destinations and their routes by periodically distributing
routing tables throughout the network. The main disadvantages of such algorithms are:
A large amount of data is required for table maintenance.
Slow reaction to restructuring and failures.
Example: Optimized Link State Routing Protocol (OLSR)
Distance vector routing
As in a fixed network, nodes maintain routing tables. Distance-vector protocols are based on calculating the
direction and distance to any link in a network. "Direction" usually means the next hop address and the
exit interface. "Distance" is a measure of the cost to reach a certain node. The least cost route between
any two nodes is the route with minimum distance. Each node maintains a vector (table) of minimum
distance to every node. The cost of reaching a destination is calculated using various route
metrics. RIP uses the hop count of the
destination whereas IGRP takes into
account other information such as
node delay and available bandwidth.
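The table-merge step at the heart of distance-vector routing can be sketched in a few lines. This is an illustrative Python sketch (the function and variable names are invented for the example, not taken from any real routing stack), using hop count as the metric in the style of RIP:

```python
def dv_update(own_table, link_cost, neighbor, neighbor_table):
    """One distance-vector step: merge a neighbor's advertised table.

    own_table maps destination -> (cost, next_hop); link_cost is the cost
    of the direct link to `neighbor` (hop count for a RIP-like metric)."""
    changed = False
    for dest, (cost, _) in neighbor_table.items():
        candidate = link_cost + cost
        if dest not in own_table or candidate < own_table[dest][0]:
            own_table[dest] = (candidate, neighbor)  # shorter route found
            changed = True
    return changed

# Node A learns a 2-hop route to C via its neighbor B.
table_a = {"B": (1, "B")}
table_b = {"C": (1, "C")}
dv_update(table_a, 1, "B", table_b)
print(table_a["C"])  # -> (2, 'B')
```

A metric like IGRP's would simply replace the hop count with a cost combining delay and bandwidth; the merge logic stays the same.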
Reactive routing
This type of protocol finds a route
based on user and traffic demand by
flooding the network with Route
Request or Discovery packets. The
main disadvantages of such
algorithms are:
High latency time in route
finding.
Excessive flooding can lead to network clogging.
However, clustering can be used to limit flooding. The latency incurred during route discovery is not
significant compared to periodic route update exchanges by all nodes in the network.
Example: Ad hoc On-Demand Distance Vector Routing (AODV)
Flooding
Flooding is a simple routing algorithm in which every incoming packet is sent through every outgoing link except
the one it arrived on. Flooding is used in bridging and in systems such as Usenet and peer-to-peer file
sharing and as part of some routing protocols, including OSPF, DVMRP, and those used in wireless ad
hoc networks.
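A minimal sketch of flooding, assuming a simple adjacency-map representation of the network (illustrative only; real protocols use per-packet sequence numbers rather than a global seen-set):

```python
from collections import deque

def flood(adj, source, packet_id):
    """Flood a packet from `source`: forward on every link except the one
    the packet arrived on, and suppress duplicates so the broadcast
    terminates instead of looping forever."""
    seen = {source}
    queue = deque([(source, None)])  # (node, arrived_from)
    deliveries = []
    while queue:
        node, came_from = queue.popleft()
        for nbr in adj[node]:
            if nbr == came_from or nbr in seen:
                continue  # don't echo back or re-flood a duplicate
            seen.add(nbr)
            deliveries.append((packet_id, nbr))
            queue.append((nbr, node))
    return deliveries

# Every reachable node receives the packet exactly once.
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(flood(adj, "A", "pkt-1"))  # -> [('pkt-1', 'B'), ('pkt-1', 'C'), ('pkt-1', 'D')]
```

The same mechanism, carrying a Route Request instead of data, is what reactive protocols such as AODV use for route discovery.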
Hybrid routing
This type of protocol combines the advantages of proactive and reactive routing. The routing is initially
established with some proactively prospected routes and then serves the demand from additionally
activated nodes through reactive flooding. The choice of one or the other method requires
predetermination for typical cases. The main disadvantages of such algorithms are:
1. The advantage depends on the number of other nodes activated.
2. Reaction to traffic demand depends on the gradient of traffic volume.
Example: Zone Routing Protocol (ZRP)
Position-based routing
Position-based routing methods use
information on the exact locations of
the nodes. This information is obtained
for example via a GPS receiver. Based
on the exact location the best path
between source and destination nodes
can be determined.
Example: "Location-Aided Routing in
mobile ad hoc networks" (LAR)
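Greedy geographic forwarding is one common position-based strategy: at each hop, hand the packet to the neighbor closest to the destination's coordinates. The sketch below is illustrative (names and topology invented for the example) and also shows the strategy's known failure mode, the local minimum:

```python
import math

def greedy_forward(positions, adj, src, dst):
    """Position-based (greedy geographic) forwarding. Returns the hop
    sequence, or None if no neighbor is closer to the destination
    (a local minimum, where greedy forwarding gets stuck)."""
    def dist(a, b):
        return math.hypot(positions[a][0] - positions[b][0],
                          positions[a][1] - positions[b][1])
    path = [src]
    node = src
    while node != dst:
        best = min(adj[node], key=lambda n: dist(n, dst))
        if dist(best, dst) >= dist(node, dst):
            return None  # stuck: every neighbor is farther from dst
        path.append(best)
        node = best
    return path

positions = {"S": (0, 0), "A": (1, 0), "B": (2, 1), "D": (3, 1)}
adj = {"S": ["A"], "A": ["S", "B"], "B": ["A", "D"], "D": ["B"]}
print(greedy_forward(positions, adj, "S", "D"))  # -> ['S', 'A', 'B', 'D']
```

Protocols like LAR use the position information differently (to restrict the route-discovery flood), but rely on the same GPS-derived coordinates.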
Mathematical models
The traditional model is the random geometric graph.
[Figure: A randomly constructed geometric graph drawn inside a square]
These are graphs consisting of a set of nodes placed according to a point process in some usually bounded
subset of the n-dimensional plane, mutually coupled according to a Boolean probability mass function of
their spatial separation (see e.g. unit disk graphs). The connections between nodes may have different
weights to model the difference in channel attenuations. One can then study network observables (such as
connectivity, centrality or the degree distribution) from a graph-theoretic perspective. One can further
study network protocols and algorithms to improve network throughput and fairness.
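A random geometric graph of this kind can be generated in a few lines. The sketch below (illustrative, standard library only) places nodes uniformly in the unit square and connects pairs whose separation falls below a radius, i.e. a unit-disk-style connection rule, then computes one simple observable:

```python
import random

def random_geometric_graph(n, radius, seed=0):
    """Place n nodes uniformly in the unit square and connect every pair
    whose Euclidean separation is below `radius` (a unit disk graph)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
             < radius ** 2]
    return pts, edges

def degree_distribution(n, edges):
    """Per-node degree counts, a typical graph-theoretic observable."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

pts, edges = random_geometric_graph(50, 0.3)
print(len(edges), max(degree_distribution(50, edges)))
```

Weighted edges (to model channel attenuation) would simply attach a weight to each pair instead of a yes/no link.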
Pros and cons
Pros
No expensive infrastructure must be installed
Use of unlicensed frequency spectrum
Quick distribution of information around sender
No single point of failure.
Cons
All network entities may be mobile ⇒ very dynamic topology
Network functions must have high degree of adaptability
No central entities ⇒ operation in completely distributed
manner.
"DIGITAL CASH"
Ripa Ghosh
CSE, 3rd year
What is Digital Cash?
Digital cash is a payment message bearing a digital signature which functions as a medium of exchange or
store of value. It needs to be backed by a trusted third party, usually the government and the banking
industry.
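As a rough illustration of "a payment message bearing a digital signature", here is a textbook-RSA sketch in Python. The tiny key and the message format are invented for this example and would be wildly insecure in practice; a real system would use a vetted signature library:

```python
import hashlib

# Textbook RSA toy key (p=61, q=53): for illustration only, far too
# small to be secure.
N, E, D = 3233, 17, 2753

def sign(message):
    """The issuer signs the payment message: hash it, then apply the
    private exponent D. The signature makes the token unforgeable."""
    h = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % N
    return pow(h, D, N)

def verify(message, signature):
    """Anyone holding the public key (N, E) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % N
    return pow(signature, E, N) == h

token = "coin-serial=42;value=10"
sig = sign(token)
print(verify(token, sig))                       # -> True
print(verify("coin-serial=42;value=99", sig))   # almost surely False: hash differs
```

The point is simply that the token's value claim cannot be altered without invalidating the bank's signature.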
Ideal properties of a Digital Cash system:
1. Secure. Alice should be able to pass digital cash to Bob without either of them, or others, able to
alter or reproduce the electronic token.
2. Anonymous. Alice should be able to pay Bob without revealing her identity, and without Bob
revealing his identity. Moreover, the Bank should not know who Alice paid or who Bob was paid
by. Even stronger, they should have the option to remain anonymous concerning the mere
existence of a payment on their behalf.
3. Portable. The security and use of the digital cash is not dependent on any physical location. The
cash should be able to be stored on disk or USB memory stick, sent by email, SMS, internet chat,
or uploaded on web forms. Digital cash should not be restricted to a single, proprietary computer
network.
4. Two-way. Peer-to-peer payments are possible without either party required to attain registered
merchant status (in contrast with today's card-based systems). Alice, Bob, Carol, and David share
an elaborate dinner together at a trendy restaurant and Alice pays the bill in full. Bob, Carol, and
David each should then be able to transfer one-fourth of the total amount in digital cash to Alice.
5. Off-line capable. The protocol between the two exchanging parties is executed off-line, meaning
that neither is required to be host-connected in order to proceed. Availability must be unrestricted.
Alice can freely pass value to Bob at any time of day without requiring third-party authentication.
6. Wide acceptability. The digital cash is well-known and accepted in a large commercial zone.
With several digital cash providers displaying wide acceptability, Alice should be able to use her
preferred unit in more than just a restricted local setting.
7. User-friendly. The digital cash should be simple to use from both the spending perspective and
the receiving perspective. Simplicity leads to mass use and mass use leads to wide acceptability.
Alice and Bob should not require a degree in cryptography as the protocol machinations should
be transparent to the immediate user.
These are ideal properties, and no known system satisfies them all.
Categorization of payment systems
Implementations of payment systems that don't satisfy all the requirements may be conveniently classified
according to these criteria:
1. Anonymous or identified. Anonymous e-cash works just like real paper cash. Once anonymous
e-cash is withdrawn from an account, it can be spent or given away without leaving a transaction
trail. This, however, can be considered contentious. Identified payment systems such as credit card
payment, or payment by Paypal leave an audit trail, and the identity of the payee and the payer is
known to the Bank, and (usually) to each other.
2. Online or offline. Online means you need to interact with a bank (via a network) to conduct a
transaction with a third party. Offline means you can conduct a transaction without having to
directly involve a bank.
3. Requiring a trusted platform. Some protocols may require a trusted platform, such as a smart
card. Smart cards are small plastic cards like credit cards, bearing a chip. They are tamper-
resistant and can force Alice and Bob to adhere to the protocol. This is convenient for the
protocol designer, but threatens to tie users to proprietary interfaces and to remove transparency
of the system. In contrast, internet protocols endorsed by the IETF are open and can be
interoperably implemented by anyone.
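The anonymity of the first category is classically achieved with Chaum-style blind signatures: the bank signs a blinded coin without ever seeing it, so it cannot later link the coin to the withdrawal. The sketch below is illustrative only, using an insecurely small textbook-RSA key:

```python
import hashlib

# Chaum-style RSA blind signature sketch with a toy textbook key
# (p=61, q=53; far too small for real use).
N, E, D = 3233, 17, 2753

def coin_hash(serial):
    return int.from_bytes(hashlib.sha256(serial.encode()).digest(), "big") % N

# 1. Alice blinds her coin with a random factor r before sending it.
r = 7  # must be coprime to N; fixed here for reproducibility
h = coin_hash("coin-serial=42")
blinded = (h * pow(r, E, N)) % N

# 2. The bank signs the blinded value with its private exponent D,
#    never seeing the coin itself.
blind_sig = pow(blinded, D, N)

# 3. Alice unblinds: the result is a valid bank signature on the
#    original coin, unlinkable to the blinded value the bank saw.
sig = (blind_sig * pow(r, -1, N)) % N
print(pow(sig, E, N) == h)  # -> True: verifies against the public key
```

The algebra behind step 3: blind_sig = h^D * r^(E*D) = h^D * r (mod N), so multiplying by r's inverse leaves exactly h^D, the ordinary RSA signature on the coin.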
The Online Model
Pros and Cons of the online scheme
Pros
– Provides fully anonymous and untraceable digital cash.
– No double spending problems.
– Don't require additional secure hardware – cheaper to implement.
Cons
– Communications overhead between merchant and the bank.
– Huge database of coin records.
– Difficult to scale, need synchronization between bank servers.
– Coins are not reusable
The Offline Model
Pros and Cons of the offline model
Advantages
– Off-line scheme
– User is fully anonymous unless they double spend
– Bank can detect double spenders
– Banks don't need to synchronize databases on each transaction
– Coins can be reusable
– Reduced size of the coin database
Disadvantages
– Might not prevent double spending immediately
– More expensive to implement
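The bank-side double-spend detection mentioned above can be sketched as a lookup in the spent-coin database. The names below are invented for illustration:

```python
class Bank:
    """Sketch of offline e-cash deposit: the bank records spent coin
    serials and flags any coin that is deposited a second time."""
    def __init__(self):
        self.spent = {}  # coin serial -> first depositor

    def deposit(self, serial, merchant):
        if serial in self.spent:
            # Offline schemes detect (rather than prevent) double
            # spending, and only when the second copy reaches the bank.
            return f"double spend: already deposited by {self.spent[serial]}"
        self.spent[serial] = merchant
        return "accepted"

bank = Bank()
print(bank.deposit("coin-42", "Bob"))    # -> accepted
print(bank.deposit("coin-42", "Carol"))  # -> double spend: already deposited by Bob
```

This is exactly the trade-off the pros/cons lists describe: no per-transaction synchronization, at the cost of catching the cheat only after the fact.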
On 8 November 2016, the Government of India announced
the demonetisation of all Rs. 500 and Rs. 1,000 banknotes of
the Mahatma Gandhi Series. The government claimed that the
action would curtail the shadow economy
and crack down on the use of illicit and
counterfeit cash to fund illegal activity and
terrorism.
"DNA Chip Or Microarray"
Gobinda Santra
CSE – 4th year
INTRODUCTION:
Molecular Biology research evolves through the development of the technologies used to carry it
out. It is not possible to study a large number of genes using traditional methods. DNA Microarray
is one such technology which enables the researchers to investigate and address issues which were once
thought to be non traceable. One can analyze the expression of many genes in a single reaction quickly
and in an efficient manner. DNA Microarray technology has empowered the scientific community to
understand the fundamental aspects underlying the
growth and development of life as well as to
explore the genetic causes of anomalies occurring
in the functioning of the human body.
A DNA microarray, also known as a DNA chip or biochip, is a collection of microscopic DNA spots attached
to a solid surface, on which thousands of nucleic acids are bound and used to measure the relative
concentration of nucleic acid sequences in a mixture via hybridization and subsequent detection of the
hybridization events.
A typical microarray experiment involves the hybridization of an mRNA molecule to the DNA template
from which it originated. Many DNA samples are used to construct an array. The amount of mRNA
bound to each site on the array indicates the expression level of the various genes. This number may run
into the thousands. All the data is collected and a profile is generated for gene expression in the cell.
The early history of DNA arrays:
An argument can be made that the original DNA array was created with the colony hybridization method
of Grunstein and Hogness (Grunstein and Hogness, 1975). In this procedure, DNA of interest was
randomly cloned into E. coli plasmids that were plated onto agar petri plates covered with nitrocellulose
filters. Replica plating was used to produce additional agar plates. The colonies on the filters were lysed
and their DNAs were denatured and fixed to the filter to produce a random and unordered collection of
DNA spots that represented the cloned fragments. Hybridization of a radiolabeled probe of a DNA or
RNA of interest was used to rapidly screen thousands of colonies to identify clones containing DNA that was
complementary to the probe.
In 1979, this approach was adapted to create ordered arrays by Gergen et al. (Gergen et al., 1979), who
picked colonies into 144-well microplates. They created a mechanical 144-pin device and a jig that
allowed them to replicate multiple microtiter plates on agar and produce arrays of 1728 different colonies
in a 26 × 38 cm region. An additional transfer of colonies to squares of Whatman filter paper, followed by
growth, lysis, denaturation and fixing of the DNA to the filter, allowed the production of DNA arrays on
filters that could be re-used multiple times. During the next decade, filter-based arrays and protocols
similar to these were used in a variety of applications including: cloning genes of specific interest,
identifying SNPs (Miller and Barnes, 1986), cloning genes that are differentially expressed between two
samples (Crampton et al., 1980) and physical mapping (Craig et al., 1990).
In the late 1980s and early 1990s, Hans Lehrach's group automated these processes by using robotic
systems to rapidly array clones from microtiter plates onto filters (Craig et al., 1990; Lennon and Lehrach,
1991). The concomitant development of cDNA cloning in the late 1970s and early 80s (Auffray et al.,
1980; Auffray and Rougeon, 1980a; Auffray and Rougeon, 1980b; Humphries et al., 1977), combined
with international programs to fully sequence both the human genome (Barnhart, 1989; Watson and
Jordan, 1989) and the human transcriptome (Aaronson et al., 1996; Dias Neto et al., 2000), led to efforts to
create reference sets of cDNAs and cDNA filter arrays for human (Lennon et al., 1996) and other
genomes (Bonaldo et al., 1996). By the late 1990s and early 2000s, sets of non-redundant cDNAs
became widely available, and the complete genome sequences of some organisms allowed for sets of PCR
products representing all the known open reading frames (ORFs) in small genomes (Lashkari et al.,
1997; Richmond et al., 1999). These sets, combined with readily available robotics, allowed individual
labs to make their own cDNA or ORF arrays containing gene content that represented the vast
majority of genes in a genome.
The birth of the modern DNA array:
In the late 90s and 2000s, DNA array technology progressed rapidly as both new methods of
production and fluorescent detection were adapted to the task. In addition, increases in our knowledge of
the DNA sequences of multiple genomes provided the raw information necessary to assure that arrays
could be made which fully represented the genes in a genome, all the sequence in a genome, or a large
fraction of the sequence variation in a genome. It should also be noted that during this time, there was a
gradual transition from spotting relatively long DNAs on arrays to producing arrays using 25-60 bp
oligos. The transition to oligo arrays was made possible by the increasing amounts of publicly available
DNA sequence information. The use of oligos (as opposed to longer sequences) also provided an increase
in specificity for the intended binding target, as oligos could be designed to target regions of genes or the
genome that were most dissimilar from other genes or regions. Three basic types of arrays came into play
during this time frame: spotted arrays on glass, in-situ synthesized arrays and self-assembled arrays.
Uses and types:
Three basic types of microarrays:
(A) Spotted arrays on glass (B) Self assembled arrays (C) In-situ synthesized arrays.
(A) Spotted arrays:
In 1996, DeRisi et al. published a method which allowed very high-density DNA arrays to be
made on glass substrates (DeRisi et al., 1996). Poly-lysine coated glass microscope slides provided good
binding of DNA, and a robotic spotter was designed to spot multiple glass slide arrays from DNA stored
in microtiter dishes. By using slotted pins (similar to fountain pens in design), a single dip of a pin in DNA
solution could spot multiple slides. Spotting onto glass allowed one to fluorescently label the sample.
Fluorescent detection provided several advantages relative to the radioactive or chemiluminescent labels
common to filter-based arrays. First, fluorescent detection is quite sensitive and has a fairly large dynamic
range. Second, fluorescent labeling is generally less expensive and less complicated than radioactive or
chemiluminescent labeling. Third, fluorescent labeling allowed one to label two (or potentially more)
samples in different colors and cohybridize the samples to the same array. As it was very difficult to
reproducibly produce spotted arrays, comparisons of individually hybridized samples to ostensibly
identical arrays would result in false differences due to array-to-array variation. However, a two-color
approach in which the ratio of signals on the same array is measured is much more reproducible.
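The two-color ratio approach can be sketched numerically: for each spot, the log2 ratio of the two channel intensities is computed, so that array-to-array variation cancels out. The intensity values below are made up for illustration:

```python
import math

def log_ratios(red, green):
    """Per-spot log2(red/green) ratios for a two-color microarray scan.
    Comparing the two channels on the same array cancels the spot-to-spot
    variation that confounds comparisons across separately made arrays."""
    return [math.log2(r / g) for r, g in zip(red, green)]

# Intensities for three genes in the test (e.g. Cy5/red) and reference
# (e.g. Cy3/green) samples: up 4x, unchanged, down 2x.
print(log_ratios([4000, 1500, 500], [1000, 1500, 1000]))  # -> [2.0, 0.0, -1.0]
```

Positive values indicate genes expressed more highly in the test sample, negative values the reverse.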
(B) Self assembled arrays:
An alternative approach to the construction of arrays was created by the group of David Walt at Tufts
University (Ferguson et al., 2000; Michael et al., 1998; Steemers et al., 2000; Walt, 2000) and ultimately
licensed to Illumina. Their method involved synthesizing DNA on small polystyrene beads and
depositing those beads on the end of a fiber optic array in which the ends of the fibers were etched to
provide a well that is slightly larger than one bead. Different types of DNA would be synthesized on
different beads, and applying a mixture of beads to the fiber optic cable would result in a randomly
assembled array. In early versions of these arrays, the beads were optically encoded with different
fluorophore combinations in order to allow one to determine which oligo was in which position on the
array (referred to as "decoding the array") (Ferguson et al., 2000; Michael et al., 1998; Steemers et al.,
2000; Walt, 2000). Optical decoding by fluorescent labeling limited the total number of unique beads that
could be distinguished. Hence, the later and present-day methods for decoding the beads involve
hybridizing and detecting a number of short, fluorescently labeled oligos in a sequential series of
steps (Gunderson et al., 2004). This not only allows for an extremely large number of different types of
beads to be used on a single array but also functionally tests the array prior to its use in a biological assay.
Later versions of the Illumina arrays used a pitted glass surface to contain the beads instead of a fiber
optic array.
(C) In-situ synthesized arrays:
In 1991, Fodor et al. published a method for light-directed, spatially addressable chemical synthesis which
combined photolabile protecting groups with photolithography to perform chemical synthesis on a solid
substrate (Fodor et al., 1991). In this initial work, the authors demonstrated the production of arrays of
10-amino-acid peptides and, separately, arrays of di-nucleotides. In 1994, Fodor et al. at the recently formed
company of Affymetrix demonstrated the ability to use this technology to generate DNA arrays consisting
of 256 different octa-nucleotides (Pease et al., 1994). By 1995-1996, Affymetrix arrays were being used
to detect mutations in the reverse transcriptase and protease genes of the highly polymorphic HIV-1
genome (Lipshutz et al., 1995) and to measure variation in the human mitochondrial genome (Chee et al.,
1996). Eventually, Affymetrix used this technology to develop a wide catalogue of DNA arrays for use in
expression analysis (Lockhart et al., 1996; Wodicka et al., 1997), genotyping (Chee et al., 1996; Hacia et
al., 1996) and sequencing (G. Wallraff, 1997) (see www.Affymetrix.com for the current catalog of arrays).
A major advantage of the Affymetrix technology is that because the DNA sequences are directly
synthesized on the surface, only a small collection of reagents (the 4 modified nucleotides, plus a small
handful of reagents necessary for the de-blocking and coupling steps) are needed to construct an
arbitrarily complex array. This contrasts with the spotted-array technologies, in which one needed to
construct or obtain all the sequences that one wished to deposit on the array in advance of array
construction. However, the initial Affymetrix technology was limited in flexibility, as each model of array
required the construction of a unique set of photolithographic masks in order to direct the light to the
array at each step of the synthesis process. In 2002, authors from NimbleGen Systems Inc. published a
method in which the photo-deprotection step of Fodor et al. (Fodor et al., 1991; Lipshutz et al., 1999) is
accomplished using micro-mirrors (similar to those in video projectors) to direct light at the
pixels on the array (Nuwaysir et al., 2002). This allows for custom arrays to be manufactured in small
volumes at much lower cost than by photolithographic methods using masks to direct light (which are
cheaper for large-volume production). One constraint with this method is that the total number of
addressable pixels (i.e. unique oligos that can be synthesized) is limited to the number of addressable
positions in the micro-mirror device (of order 1M).
The above is not intended to be a comprehensive history or survey of all DNA microarray technologies.
However, it does cover the major advances in the field and the predominant methods of manufacture of
arrays.
Applications of microarrays:
Gene expression profiling: In an mRNA or gene expression profiling experiment the expression
levels of thousands of genes are simultaneously monitored to study the effects of certain treatments,
diseases, and developmental stages on gene expression.
Comparative genomic hybridization: Assessing genome content in different cells or closely
related organisms.
GeneID: Small microarrays to check the identity of organisms in food and feed, mycoplasmas in cell culture, or pathogens for disease detection, mostly combining PCR and microarray technology.
Fusion genes microarray: A fusion gene microarray can detect fusion transcripts, e.g. from cancer specimens. The principle builds on alternative splicing microarrays. The oligo design strategy enables combined measurements of chimeric transcript junctions with exon-wise measurements of individual fusion partners.
Microarray Technique:
An array is an orderly arrangement of samples in which matching of known and unknown DNA samples is done based on base-pairing rules. An array experiment makes use of common assay systems such as microplates or standard blotting membranes. The sample spots are typically less than 200 microns in diameter, and an array usually contains thousands of spots.
Thousands of spotted samples known as probes (with known identity) are immobilized on a solid support (a microscope glass slide, silicon chip, or nylon membrane). The spots can be DNA, cDNA, or
oligonucleotides. These are used to determine complementary binding of the unknown sequences thus
allowing parallel analysis for gene expression and gene discovery. An experiment with a single DNA chip
can provide information on thousands of genes simultaneously. An orderly arrangement of the probes on
the support is important as the location of each spot on the array is used for the identification of a gene.
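Conceptually, the readout step is a lookup from spot coordinates to gene identity, followed by a comparison of signal intensities. A minimal Python sketch, using made-up spot positions, gene names, and two-channel intensities (e.g. Cy3 for control, Cy5 for treated, as in two-colour arrays), might look like this:

```python
import math

# Hypothetical array layout: each (row, col) spot position maps to a known probe/gene.
layout = {(0, 0): "geneA", (0, 1): "geneB", (1, 0): "geneC", (1, 1): "geneD"}

# Invented fluorescence intensities for the two channels at each spot.
cy3 = {(0, 0): 500.0, (0, 1): 120.0, (1, 0): 980.0, (1, 1): 450.0}
cy5 = {(0, 0): 1000.0, (0, 1): 115.0, (1, 0): 245.0, (1, 1): 460.0}

def log2_ratios(layout, cy3, cy5):
    """Identify each gene by its spot position and report log2(treated/control)."""
    return {gene: math.log2(cy5[pos] / cy3[pos]) for pos, gene in layout.items()}

ratios = log2_ratios(layout, cy3, cy5)
# geneA: +1.0 (2-fold up); geneC: -2.0 (4-fold down)
```

The log2 ratio is the conventional way to express relative expression: +1 means 2-fold up-regulation, -2 means 4-fold down-regulation.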
Limitations of DNA microarrays:
At their core, microarrays are simply devices to simultaneously measure the relative concentrations of
many different DNA or RNA sequences. While they have been incredibly useful in a wide variety of
applications, they have a number of limitations. First, arrays provide an indirect measure of relative
concentration. That is, the signal measured at a given position on a microarray is typically assumed to be
proportional to the concentration of a presumed single species in solution that can hybridize to that
location. However, due to the kinetics of hybridization, the signal level at a given location on the array is
not linearly proportional to the concentration of the species hybridizing to the array. At high concentrations
the array will become saturated and at low concentrations, equilibrium favors no binding. Hence, the
signal is linear only over a limited range of concentrations in solution. Second, especially for complex
mammalian genomes, it is often difficult (if not impossible) to design arrays in which multiple related
DNA/RNA sequences will not bind to the same probe on the array. A sequence on an array that was
designed to detect "gene A" may also detect "genes B, C and D" if those genes have significant sequence homology to gene A. This can be particularly problematic for gene families and for genes with multiple
splice variants. It should be noted that it is possible to design arrays specifically to detect splice variants
either by making array probes to each exon in the genome(Gardina et al., 2006) or to exon
junctions(Castle et al., 2003). However, it is difficult to design arrays that will uniquely detect every exon
or gene in genomes with multiple related genes.
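The limited linear range described earlier can be illustrated with a simple Langmuir-style binding model; the constants here (s_max, k) are arbitrary illustration values, not parameters of any real array:

```python
def hybridization_signal(conc, s_max=10000.0, k=1.0):
    """Langmuir-style binding isotherm: spot intensity saturates as the
    concentration of the hybridizing species grows."""
    return s_max * conc / (k + conc)

low = hybridization_signal(0.01)     # near-linear regime
mid = hybridization_signal(0.02)     # doubling the concentration ~doubles the signal
high = hybridization_signal(10.0)    # approaching saturation
higher = hybridization_signal(20.0)  # doubling the concentration adds only ~5% signal
```

Only in the low-concentration regime does a doubling of concentration roughly double the signal; near saturation the array can no longer distinguish large concentration differences.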
Finally, a DNA array can only detect sequences that the array was designed to detect. That is, if the
solution being hybridized to the array contains RNA or DNA species for which there is no complementary
sequence on the array, those species will not be detected. For gene expression analysis, this typically
means that genes that have not yet been annotated in a genome will not be represented on the array. In
addition, non-coding RNAs that are not yet recognized as expressed are typically not represented on an
array. Moreover, for highly variable genomes such as those from bacteria, arrays are typically designed
using information from the genome of a reference strain. Such arrays may be missing a large fraction of
the genes present in a given isolate of the same species.
The Future of DNA arrays:
Given the limitations of arrays mentioned above, it would be far preferable to have an unbiased method
to directly measure all the DNA or RNA species present in a particular sample. The advent of next-generation sequencing technologies, combined with the rapid decrease in the cost of sequencing, now provides such a method. Sequencing is a relatively unbiased approach to measuring which nucleic acids are present in solution.
While sample preparation or different enzymes may bias sequencing counts, unlike DNA arrays,
sequencing is not dependent on prior knowledge of which nucleic acids may be present. Sequencing is
also able to independently detect closely related gene
sequences, novel splice forms or RNA editing that may
be missed due to cross hybridization on DNA
microarrays. As a result of these advantages and the
decreasing cost of sequencing, DNA arrays are being
rapidly replaced by sequencing for nearly every assay
that has previously been performed on microarrays.
As the cost of sequencing is currently dropping by a
factor of two every five months, it's likely that DNA
arrays will be fully replaced by sequencing methods
within the next 5-10 years.
"It is our choices, Harry, that show what we truly are, far more than our abilities."
- Harry Potter and the Chamber of Secrets
"Cuckoo search"
Gouranga Mondal
CSE – 4th year
INTRODUCTION : Cuckoo search (CS) is an
optimization algorithm developed by Xin-she Yang
and Suash Deb in 2009. It was inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of host birds of other species.
CUCKOO BEHAVIOR : Cuckoos have an
aggressive reproduction strategy that involves the
female laying her fertilized eggs in the nest of another
species so that the surrogate parents unwittingly raise her brood. Some cuckoo species have evolved in
such a way that female parasitic cuckoos are often very specialized in mimicking the colour and pattern of the eggs of a few chosen host species. This reduces the probability of the eggs being abandoned and increases their reproductivity.
CONSEQUENCE : Some host birds can engage in direct conflict with intruding cuckoos. For example,
if a host bird discovers that the eggs are not its own, it will either throw these alien eggs away or simply
abandon its nest and build a new nest elsewhere.
REPRESENTATION :
- Each egg in a nest represents a solution, and a cuckoo egg represents a new solution.
- The aim is to use the new and potentially better solutions (cuckoos) to replace a not-so-good solution in the nests.
Fig : Cuckoo bird
Fig : Variants of Cuckoo search
ADVANTAGE :
-Deals with multi-criteria optimization problems.
-Easy to implement.
- Aims to speed up convergence.
-Simplicity.
-It can be still hybridized with other swarm-based algorithms.
APPLICATION : Some applications of cuckoo search are
-spring design and welded beam design problems.
-Design optimization of truss structures.
-Engineering optimization.
-Steel frames.
-Wind turbine blade.
-Reliability problems.
-Stability analysis.
Friends, Romans and CSEians!! Provided below is the pseudocode for Cuckoo Search. Kindly spare your time and implement the logic in any programming language you know!!
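For readers without the printed pseudocode to hand, here is a simplified sketch of the algorithm in Python. The step size, search bounds, and the purely random rebuilding of abandoned nests are illustrative simplifications, not the authors' exact formulation:

```python
import math
import random

random.seed(42)  # fixed seed so the run is reproducible

def levy_step(beta=1.5):
    """Draw one step from a Levy-stable distribution (Mantegna's algorithm)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=500, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim; each nest holds one candidate solution."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [f(x) for x in nests]
    for _ in range(iters):
        # Get a cuckoo: take a Levy flight from a randomly chosen nest.
        i = random.randrange(n_nests)
        egg = [min(hi, max(lo, x + 0.1 * levy_step())) for x in nests[i]]
        # Lay it in a random nest; keep it only if it beats that nest's egg.
        j = random.randrange(n_nests)
        if f(egg) < fitness[j]:
            nests[j], fitness[j] = egg, f(egg)
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        worst = sorted(range(n_nests), key=fitness.__getitem__, reverse=True)
        for k in worst[:int(pa * n_nests)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
            fitness[k] = f(nests[k])
    best = min(range(n_nests), key=fitness.__getitem__)
    return nests[best], fitness[best]

sphere = lambda x: sum(v * v for v in x)   # toy objective: minimum 0 at the origin
solution, value = cuckoo_search(sphere, dim=2)
```

Run on the sphere function, the best nest converges toward the origin; the abandonment step (fraction `pa`) is what keeps the search from stagnating in poor nests.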
"THERMOGRAPHY"
Ishani Dey
CSE – 4th year
Infrared thermography (IRT), thermal imaging, and thermal video are examples of infrared imaging
science. Thermographic cameras usually detect radiation in the long-infrared range of the electromagnetic
spectrum (roughly 9,000–14,000 nanometers or 9–14 µm) and produce images of that radiation, called
thermograms. Since infrared radiation is emitted by all objects with a temperature above absolute zero
according to the black body radiation law, thermography makes it possible to see one's environment with
or without visible illumination. The amount of radiation emitted by
an object increases with temperature; therefore, thermography
allows one to see variations in temperature.
Some physiological changes in human beings and other warm-
blooded animals can also be monitored with thermal imaging during
clinical diagnostics. Thermography is used in allergy detection and
veterinary medicine. It is also used for breast screening, though
primarily by alternative practitioners as it is considerably less accurate and specific than competing
techniques. Government and airport personnel used thermography to detect suspected swine flu cases
during the 2009 pandemic.
Specialized thermal imaging cameras use focal plane arrays (FPAs) that respond to longer wavelengths
(mid- and long-wavelength infrared). The most common types are InSb, InGaAs, HgCdTe and QWIP
FPA. The newest technologies use low-cost, uncooled microbolometers as FPA sensors. Their resolution is considerably lower than that of optical cameras, mostly 160×120 or 320×240 pixels, up to 1024×768
for the most expensive models. Thermal imaging cameras are much more expensive than their visible-
spectrum counterparts, and higher-end models are often export-restricted due to the military uses for this
technology.
DIFFERENCE BETWEEN INFRARED FILM AND THERMOGRAPHY
IR film is sensitive to infrared (IR) radiation in the 250 °C to 500 °C (482 °F to 932 °F) range, while the range of thermography is approximately −50 °C to over 2,000 °C (−58 °F to over 3,632 °F). So, for an IR film to work thermographically, it must be over 250 °C (482 °F) or be reflecting infrared radiation from something that is at least that hot.
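The Fahrenheit figures for these ranges follow from the standard Celsius-to-Fahrenheit conversion, which is easy to verify in a few lines of Python:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

assert c_to_f(250) == 482     # lower limit of IR film sensitivity
assert c_to_f(500) == 932     # upper limit of IR film sensitivity
assert c_to_f(-50) == -58     # lower limit of thermography
assert c_to_f(2000) == 3632   # upper end of thermography
```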
Night vision infrared devices image in the near-infrared,
just beyond the visual spectrum, and can see emitted or
reflected near-infrared in complete visual darkness.
However, again, these are not usually used for
thermography due to the high temperature requirements,
but are instead used with active near-IR sources.
ADVANTAGES OF THERMOGRAPHY
◾It shows a visual picture, so temperatures over a large area can be compared
◾It is capable of catching moving targets in real time
◾It is able to find deteriorating, i.e., higher temperature
components prior to their failure
◾It can be used to measure or observe in areas inaccessible or
hazardous for other methods
◾It is a non-destructive test method
DISADVANTAGES OF THERMOGRAPHY
◾Quality cameras often have a high price (often US$3,000 or more) due to the expense of the larger pixel array (state of the art 1024×720), while less expensive models (with pixel arrays of 40×40 up to 160×120 pixels) are also available. Fewer pixels reduce the image quality, making it more difficult to distinguish proximate targets within the same field of view.
◾Many models do not provide the irradiance measurements
used to construct the output image; the loss of this information
without a correct calibration for emissivity, distance, and
ambient temperature and relative humidity entails that the
resultant images are inherently incorrect measurements of temperature
◾Images can be difficult to interpret accurately when based upon certain objects, specifically objects with
erratic temperatures, although this problem is reduced in active thermal imaging.
MEDICAL USE
Thermography is used to diagnose vascular disease, neuromusculoskeletal disorders and breast tumors. It captures images from 5 to 8 feet away from the body and is capable of producing thousands of pictures using infrared light.
Thermography can measure the heat that is given off by soft tissue. It can then be compared to another
area of the body with the same structure, such as the right arm compared to the left arm. It can detect
blood flow before and after exercise, showing if blockages are present. Breast cancer detection has proven
to be accurate in 84 percent of cases through the use of thermography. The problem is that images are
hard to interpret and require a very well-trained professional.
APPLICATION IN NASA
The National Aeronautics and Space Administration (NASA) used IR Thermography to measure surface
temperature. It has been used successfully in wind and pressure tunnels. The disadvantages, according to NASA, are the difficulty of obtaining accurate data from models whose thermophysical and radiometric properties are poorly known. Retrieving accurate data can require infrared-transmitting optics that are not always available. Cameras are not suited for very low temperatures, below −50 °C. The cost of the
equipment and the expertise of interpretation is a disadvantage of the practice, as it is in the medical
community.
OTHER APPLICATIONS
Condition monitoring
◾Low Slope and Flat Roofing Inspections
◾Building diagnostics including building
envelope inspections, moisture inspections,
and energy losses in buildings
◾Thermal Mapping
◾Digital infrared thermal imaging in health
care
◾Medical imaging
◾Non-contact thermography, contact
thermography and dynamic angiothermography
◾Peripheral vascular disease screening.
◾Neuromusculoskeletal disorders.
◾Extracranial cerebral and facial vascular disease.
◾Thyroid gland abnormalities.
◾Various other neoplastic, metabolic, and inflammatory conditions.
◾Archaeological Kite Aerial Thermography
◾Thermology
◾Veterinary Thermal Imaging
Thermal imaging cameras convert the energy in the infrared wavelength into a visible light display. All
objects above absolute zero emit thermal infrared energy, so thermal cameras can passively see all
objects, regardless of ambient light. However, most thermal cameras only see objects warmer than −50 °C (−58 °F). The spectrum and amount of thermal radiation depend strongly on an object's surface
temperature. This makes it possible for a thermal imaging camera to display an object's temperature.
However, other factors also influence the radiation, which limits the accuracy of this technique. For
example, the radiation depends not only on the temperature of the object, but is also a function of the
emissivity of the object.
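The temperature and emissivity dependence is captured by the Stefan-Boltzmann law for a grey body. A minimal sketch, where the emissivity values are rough illustrative figures rather than measured data:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_exitance(temp_kelvin, emissivity=1.0):
    """Total power radiated per unit area by a grey body (Stefan-Boltzmann law)."""
    return emissivity * SIGMA * temp_kelvin ** 4

T = 305.0  # roughly skin temperature (~32 degrees C), in kelvin
skin = radiant_exitance(T, emissivity=0.98)            # skin is a near-ideal emitter
polished_metal = radiant_exitance(T, emissivity=0.10)  # shiny metal at the same T
# Same temperature, but the metal radiates ~10x less power, so a camera that is
# not corrected for emissivity would report it as much colder than it really is.
```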
"Touchscreen"
Kaustav Nandy
CSE – 4th year
A touchscreen is both an input device and an output device, normally layered on top of the electronic visual display of an information processing system. A user can give input or control the information
processing system through simple or multi-touch gestures by touching the screen with a special stylus
and/or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work while
others use a special stylus/pen only. The user can use the touchscreen to react to what is displayed and to
control how it is displayed; for example, zooming to increase the text size.
The touchscreen enables the user to interact directly with what
is displayed, rather than using a mouse, touchpad, or any other
intermediate device (other than a stylus, which is optional for
most modern touchscreens).
Touchscreens are common in devices such as game consoles,
personal computers, tablet computers, electronic voting
machines, point of sale systems, and smartphones. They can
also be attached to computers or, as terminals, to networks.
They also play a prominent role in the design of digital
appliances such as personal digital assistants (PDAs) and some e-readers.
The popularity of smartphones, tablets, and many types of information appliances is driving the demand
and acceptance of common touchscreens for portable and functional electronics. Touchscreens are found
in the medical field and in heavy industry, as well as for automated teller machines (ATMs), and kiosks
such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably
intuitive, rapid, or accurate interaction by the user with the display's content.
History
Fig : The prototype x-y mutual capacitance touchscreen, developed at CERN in 1977 by Bent Stumpe, a Danish electronics engineer, for the control room of CERN's SPS (Super Proton Synchrotron) accelerator; a further development of the self-capacitance screen also developed by Stumpe at CERN in 1972.
E.A. Johnson of the Royal Radar Establishment, Malvern, described his work on capacitive touchscreens in a short article published in 1965 and then more fully, with photographs and diagrams, in an article published in 1967. The applicability of touch technology for air traffic control was described in an article published in 1968. Frank
Beck and Bent Stumpe, engineers from CERN, developed a
transparent touchscreen in the early 1970s, based on Stumpe's work at a television factory in the early
Fig : E.A. Johnson
1960s. Then manufactured by CERN, it was put to use in 1973. A resistive touchscreen was developed by American inventor George Samuel Hurst, who received US patent #3,911,215 on October 7, 1975. The first version was produced in 1982.
In 1972, a group at the University of Illinois filed for a patent on an optical touchscreen that became a
standard part of the Magnavox Plato IV Student Terminal. Thousands were built for the PLATO IV
system. These touchscreens had a crossed array of 16 by 16 infrared position sensors, each composed of
an LED on one edge of the screen and a matched phototransistor on the other edge, all mounted in front
of a monochrome plasma display panel. This arrangement can sense any fingertip-sized opaque object in
close proximity to the screen. A similar touchscreen was used on the HP-150 starting in 1983; this was
one of the world's earliest commercial touchscreen computers. HP mounted their infrared transmitters and
receivers around the bezel of a 9" Sony Cathode Ray Tube (CRT).
In 1984, Fujitsu released a touch pad for the Micro 16, to deal with the complexity of kanji characters,
which were stored as tiled graphics. In 1985, Sega released the Terebi Oekaki, also known as the Sega
Graphic Board, for the SG-1000 video game console and SC-3000 home computer. It consisted of a
plastic pen and a plastic board with a transparent window where the pen presses are detected. It was used
primarily for a drawing software application. A graphic touch tablet was released for the Sega AI
Computer in 1986.
Touch-sensitive Control-Display Units (CDUs) were evaluated for commercial aircraft flight decks in the
early 1980s. Initial research showed that a touch interface would reduce pilot workload as the crew could
then select waypoints, functions and actions, rather than be "head down" typing in latitudes, longitudes,
and waypoint codes on a keyboard. An effective integration of this technology was aimed at helping flight
crews maintain a high-level of situational awareness of all major aspects of the vehicle operations
including its flight path, the functioning of various aircraft systems, and moment-to-moment human
interactions.
TYPES
Resistive
A resistive touchscreen panel comprises
several layers, the most important of which
are two thin, transparent electrically resistive
layers separated by a thin space. These layers
face each other with a thin gap between. The
top screen (the screen that is touched) has a
coating on the underside surface of the
screen. Just beneath it is a similar resistive
layer on top of its substrate. One layer has
conductive connections along its sides, the
other along top and bottom. A voltage is applied to one layer, and sensed by the other. When an object,
such as a fingertip or stylus tip, presses down onto the outer surface, the two layers touch to become
connected at that point: the panel then behaves as a pair of voltage dividers, one axis at a time. By
rapidly switching between each layer, the position of a pressure on the screen can be read.
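A minimal sketch of that readout, assuming a 4-wire panel read through a hypothetical 12-bit ADC: each axis is driven in turn and the divider voltage sensed on the other layer reduces to a simple ratio.

```python
def resistive_touch_position(adc_x, adc_y, adc_max=4095):
    """Convert raw ADC readings from a 4-wire resistive panel into
    normalized (x, y) coordinates in [0, 1].

    For each axis, one layer is driven with the supply voltage and the
    divider voltage is sensed on the other layer; the ratio of the sensed
    ADC count to full scale gives the position along that axis."""
    return adc_x / adc_max, adc_y / adc_max

# e.g. a touch at mid-screen horizontally, quarter-screen vertically:
x, y = resistive_touch_position(2048, 1024)
# x is approximately 0.50, y approximately 0.25
```

Real controllers additionally debounce, average several samples, and apply a calibration matrix, but the core position computation is just this voltage-divider ratio.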
Resistive touch is used in restaurants, factories and hospitals due to its high resistance to liquids and
contaminants. A major benefit of resistive touch technology is its low cost. Additionally, as only
sufficient pressure is necessary for the touch to be sensed, they may be used with gloves on, or by using
anything rigid as a finger/stylus substitute. Disadvantages include the need to press down, and a risk of
damage by sharp objects. Resistive touchscreens also suffer from poorer contrast, due to having additional
reflections from the extra layers of material (separated by an air gap) placed over the screen. This is the
type of touchscreen used by Nintendo in the DS family, the 3DS family, and the Wii U GamePad.
Surface acoustic wave
Surface acoustic wave (SAW) technology
uses ultrasonic waves that pass over the
touchscreen panel. When the panel is
touched, a portion of the wave is absorbed.
This change in the ultrasonic waves registers
the position of the touch event and sends this
information to the controller for processing.
Surface acoustic wave touchscreen panels
can be damaged by outside elements.
Contaminants on the surface can also
interfere with the functionality of the
touchscreen.
Capacitive
A capacitive touchscreen panel consists of an insulator, such as glass, coated with a transparent conductor such as indium tin oxide (ITO). As the human body is also an
electrical conductor, touching the surface of the screen
results in a distortion of the screen's electrostatic field,
measurable as a change in capacitance. Different
technologies may be used to determine the location of the
touch. The location is then sent to the controller for processing.
Unlike a resistive touchscreen, one cannot use a capacitive touchscreen through most types of electrically
insulating material, such as gloves. This
disadvantage especially affects usability in
consumer electronics, such as touch tablet
PCs and capacitive smartphones in cold
weather. It can be overcome with a special
capacitive stylus, or a special-application
glove with an embroidered patch of
conductive thread passing through it and
contacting the user's fingertip.
The largest capacitive display manufacturers
continue to develop thinner and more accurate
touchscreens, with touchscreens for mobile
devices now being produced with 'in-cell' technology that eliminates a layer, such as Samsung's Super
AMOLED screens, by building the capacitors inside the display itself. This type of touchscreen reduces
the visible distance (within millimetres) between the user's finger and what the user is touching on the
Fig : Capacitive touchscreen of a mobile phone
screen, creating a more direct contact with the content displayed and enabling taps and gestures to be
more responsive.
A simple parallel plate capacitor has two conductors separated by a dielectric layer. Most of the energy in
this system is concentrated directly between the plates. Some of the energy spills over into the area
outside the plates, and the electric field lines associated with this effect are called fringing fields. Part of
the challenge of making a practical capacitive sensor is to design a set of printed circuit traces which
direct fringing fields into an active sensing area accessible to a user. A parallel plate capacitor is not a
good choice for such a sensor pattern. Placing a finger near fringing electric fields adds conductive
surface area to the capacitive system. The additional charge storage capacity added by the finger is known
as finger capacitance, CF. The capacitance of the sensor without a finger present is denoted as CP in this
article, which stands for parasitic capacitance.
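In code, touch detection against the parasitic baseline can be sketched as a simple threshold test; the capacitance values below are invented for illustration:

```python
def is_touched(measured, baseline_cp, finger_cf=0.5e-12):
    """Register a touch when the measured capacitance exceeds the parasitic
    baseline CP by at least a minimum finger capacitance CF (here 0.5 pF)."""
    return measured - baseline_cp >= finger_cf

CP = 10e-12  # assumed parasitic capacitance of the bare sensor: 10 pF

print(is_touched(10.05e-12, CP))  # False: within noise of the baseline
print(is_touched(11.00e-12, CP))  # True: ~1 pF of finger capacitance added
```

A real controller measures CP during calibration and tracks slow drift (temperature, humidity) so that only the fast change contributed by a finger trips the threshold.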
Surface capacitance
In this basic technology, only one side of the insulator is coated with a conductive layer. A small voltage
is applied to the layer, resulting in a uniform electrostatic field. When a conductor, such as a human
finger, touches the uncoated surface, a capacitor is dynamically formed. The sensor's controller can
determine the location of the touch indirectly from the change in the capacitance as measured from the
four corners of the panel. As it has no moving parts, it is moderately durable but has limited resolution, is
prone to false signals from parasitic capacitive coupling, and needs calibration during manufacture. It is
therefore most often used in simple applications such as
industrial controls and kiosks.
Projected capacitance
Fig : Schema of projected-capacitive touchscreen
Projected capacitive touch (PCT; also PCAP) technology is a
variant of capacitive touch technology. All PCT touch screens are
made up of a matrix of rows and columns of conductive material,
layered on sheets of glass. This can be done either by etching a
single conductive layer to form a grid pattern of electrodes, or by
etching two separate, perpendicular layers of conductive material
with parallel lines or tracks to form a grid. Voltage applied to this grid creates a uniform electrostatic
field, which can be measured. When a conductive object, such as a finger, comes into contact with a PCT
panel, it distorts the local electrostatic field at that point. This is measurable as a change in capacitance. If
a finger bridges the gap between two of the "tracks", the charge field is further interrupted and detected
by the controller. The capacitance can be changed and measured at
every individual point on the grid (intersection). Therefore, this system
is able to accurately track touches. Due to the top layer of a PCT being
glass, it is a more robust solution than less costly resistive touch
technology. Additionally, unlike traditional capacitive touch
technology, it is possible for a PCT system to sense a passive stylus or
gloved fingers. However, moisture on the surface of the panel, high
humidity, or collected dust can interfere with the performance of a
PCT system. There are two types of PCT: mutual capacitance and self-
capacitance.
Fig : Back side of a Multitouch Globe, based on Projected Capacitive Touch (PCT) technology
Mutual capacitance
This is a common PCT approach, which makes use of the fact that most conductive objects are able to
hold a charge if they are very close together. In mutual capacitive sensors, a capacitor is inherently
formed by the row trace and column trace at each intersection of the grid. A 16-by-14 array, for example,
would have 224 independent capacitors. A voltage is applied to the rows or columns. Bringing a finger or
conductive stylus close to the surface of the sensor changes the local electrostatic field which reduces the
mutual capacitance. The capacitance change at every individual point on the grid can be measured to
accurately determine the touch location by measuring the voltage in the other axis. Mutual capacitance
allows multi-touch operation where multiple fingers, palms or styli can be accurately tracked at the same
time.
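A controller-side scan of such a grid can be sketched as follows (toy values; the key point is that a touch *reduces* the mutual capacitance at the touched intersection, and every cell is measured independently, which is what makes multi-touch possible):

```python
def scan_touches(cap_matrix, baseline, threshold):
    """Scan every row/column intersection of a mutual-capacitance grid and
    return all cells whose capacitance dropped below baseline by >= threshold."""
    touches = []
    for r, row in enumerate(cap_matrix):
        for c, value in enumerate(row):
            if baseline - value >= threshold:
                touches.append((r, c))
    return touches

# 3x3 toy grid with a 2.0 pF baseline; two fingers pull two cells down.
baseline = 2.0
grid = [[2.0, 2.0, 2.0],
        [2.0, 1.2, 2.0],
        [2.0, 2.0, 1.3]]
print(scan_touches(grid, baseline, threshold=0.5))  # [(1, 1), (2, 2)]
```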
Self-capacitance
Self-capacitance sensors can have the same X-Y grid as mutual capacitance sensors, but the columns and
rows operate independently. With self-capacitance, the
capacitive load of a finger is measured on each column or
row electrode by a current meter. This method produces a
stronger signal than mutual capacitance, but it is unable to
resolve accurately more than one finger, which results in
"ghosting", or misplaced location sensing.
Use of styli on capacitive screens
Capacitive touchscreens don't necessarily need to be operated
by a finger, but until recently the special styli required could
be quite expensive to purchase. The cost of this technology
has fallen greatly in recent years and capacitive styli are
now widely available for a nominal charge, and often given
away free with mobile accessories.
Infrared grid
Fig : Infrared sensors mounted around the display watch for a user's touchscreen input on this PLATO V terminal in 1981; the monochromatic plasma display's characteristic orange glow is visible.
An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs around the edges of
the screen to detect a disruption in the pattern of LED beams. These LED beams cross each other in
vertical and horizontal patterns. This helps the sensors pick up the exact location of the touch. A major
benefit of such a system is that it can detect
essentially any input including a finger,
gloved finger, stylus or pen. It is generally
used in outdoor applications and point of sale
systems, which cannot rely on a conductor
(such as a bare finger) to activate the
touchscreen. Unlike capacitive touchscreens,
infrared touchscreens do not require any
patterning on the glass which increases
durability and optical clarity of the overall
system. Infrared touchscreens are sensitive to dirt and dust that can interfere with the IR beams, suffer from parallax on curved surfaces, and register accidental presses when the user hovers a finger over the screen while searching for the item to be selected.
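The beam-break principle reduces to finding the interrupted beams on each axis and taking their centre. A sketch assuming a hypothetical 16x16 grid of LED/photodetector pairs:

```python
def ir_touch_location(h_beams, v_beams):
    """Locate a touch on an infrared grid from broken horizontal and
    vertical LED/photodetector beams (False = beam interrupted).

    Returns the centre (x, y) of the interrupted region in beam-index
    coordinates, or None if no beam on either axis is broken."""
    broken_h = [i for i, ok in enumerate(h_beams) if not ok]
    broken_v = [j for j, ok in enumerate(v_beams) if not ok]
    if not broken_h or not broken_v:
        return None
    return (sum(broken_v) / len(broken_v), sum(broken_h) / len(broken_h))

# A fingertip interrupts vertical beams 4-5 and horizontal beams 9-10:
h = [True] * 16; h[9] = h[10] = False
v = [True] * 16; v[4] = v[5] = False
print(ir_touch_location(h, v))  # (4.5, 9.5)
```

Because any opaque object breaks the beams, this works equally well for a gloved finger, a stylus, or a pen, matching the advantage described above.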
Infrared acrylic projection
A translucent acrylic sheet is used as a rear projection screen to display information. The edges of the
acrylic sheet are illuminated by infrared LEDs, and infrared cameras are focused on the back of the sheet.
Objects placed on the sheet are detectable by the cameras. When the sheet is touched by the user the
deformation results in leakage of infrared light, which peaks at the points of maximum pressure indicating
the user's touch location. Microsoft's PixelSense tables use this technology.
Optical imaging
Optical touchscreens are a relatively modern development in touchscreen technology, in which two or
more image sensors are placed around the edges (mostly the corners) of the screen. Infrared back lights
are placed in the camera's field of view on the other side of the screen. A touch shows up as a shadow and
each pair of cameras can then be pinpointed to locate the touch or even measure the size of the touching
object (see visual hull). This
technology is growing in
popularity, due to its
scalability, versatility, and
affordability, especially for
bigger units.
Dispersive signal technology
Introduced in 2002, by 3M, this
system uses sensors to detect
the piezoelectricity in the glass
that occurs due to a touch.
Complex algorithms then interpret this information and provide the actual location of the touch. The
technology claims to be unaffected by dust and other outside elements, including scratches. Since there is
no need for additional elements on screen, it also claims to provide excellent optical clarity. Also, since
mechanical vibrations are used to detect a touch event, any object can be used to generate these events,
including fingers and stylus. A downside is that after the initial touch the system cannot detect a
motionless finger.
Acoustic pulse recognition
The key to this technology is that a touch at any one position on the surface generates a sound wave in the
substrate which then produces a unique combined sound after being picked up by three or more tiny
transducers attached to the edges of the touchscreen. The sound is then digitized by the controller and compared to a list of pre-recorded sounds for every position on the surface. The cursor position is instantly updated to
the touch location. A moving touch is tracked by rapid repetition of this process. Extraneous and ambient
sounds are ignored since they do not match any stored sound profile. The technology differs from other
attempts to recognize the position of touch with transducers or microphones in using a simple table look-
up method, rather than requiring powerful and expensive signal processing hardware to attempt to
calculate the touch location without any references. As with the dispersive signal technology system, a
I –Brook (Volume 1, Issue 3) July – December 2017 |88
motionless finger cannot be detected after the initial touch. However, for the same reason, the touch
recognition is not disrupted by any resting objects. The technology was created by SoundTouch Ltd in the
early 2000s, as described by the patent family EP1852772, and introduced to the market by Tyco
International's Elo division in 2006 as Acoustic Pulse Recognition. The touchscreen used by Elo is made
of ordinary glass, giving good durability and optical clarity. APR is usually able to function with
scratches and dust on the screen with good accuracy. The technology is also well suited to displays that
are physically larger.
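The table look-up described above can be sketched as follows. The normalized-correlation matcher and the 0.8 threshold are illustrative assumptions, not Elo's actual method; the point is that an unmatched (ambient) sound simply returns no position.

```python
import math

def best_match(sample, profiles, threshold=0.8):
    """Match a digitized touch sound against pre-recorded position profiles.

    profiles maps (x, y) -> reference waveform. Returns the best-matching
    position, or None when nothing correlates well enough (treated as
    ambient noise and ignored).
    """
    def ncc(a, b):
        # normalized cross-correlation of two equal-length waveforms
        na = math.sqrt(sum(v * v for v in a))
        nb = math.sqrt(sum(v * v for v in b))
        if na == 0 or nb == 0:
            return 0.0
        return sum(x * y for x, y in zip(a, b)) / (na * nb)

    pos, score = None, threshold
    for p, ref in profiles.items():
        s = ncc(sample, ref)
        if s > score:
            pos, score = p, s
    return pos
```

Rapidly repeating this match against fresh samples is what tracks a moving touch.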
Ergonomics and usage
Touchscreen accuracy
Users must be able to accurately select targets on touchscreens, and avoid accidental selection of adjacent
targets, to effectively use a touchscreen input device. The design of touchscreen interfaces must reflect the technical capabilities of the system as well as ergonomics, cognitive psychology, and human physiology.
Guidelines for touchscreen designs were first developed in the 1990s, based on early research and actual use of older systems, so they assume the sensing technology of that era, such as infrared grids. Those touchscreens were highly dependent on the size of the user's fingers, so the guidelines are less relevant for the bulk of modern devices, which use capacitive or resistive touch technology. From the mid-
2000s onward, makers of operating systems for smartphones have promulgated standards, but these vary
between manufacturers, and allow for significant variation in size based on technology changes, so are
unsuitable from a human factors perspective.
Much more important is the accuracy humans have in selecting targets with their finger or a pen stylus.
The accuracy of user selection varies by position on the screen. Users are most accurate at the center, less
so at the left and right edges, and much less accurate at the top and especially bottom edges. The R95
accuracy varies from 7 mm in the center to 12 mm in the lower corners. Users are subconsciously aware
of this, and are also slightly slower, taking more time to select smaller targets, and any at the edges and
corners.
This inaccuracy is a result of parallax, visual acuity and the speed of the feedback loop between the eyes
and fingers. The precision of the human finger alone is much, much higher than this, so when assistive
technologies are provided such as on-screen magnifiers, users can move their finger (once in contact with
the screen) with precision as small as 0.1 mm.
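The R95 figures above can be turned into a rough target-sizing helper. The interpolation scheme below is an illustrative assumption of ours, not a published model; only the endpoint values (about 7 mm at the centre, about 12 mm in the lower corners) come from the text.

```python
def min_target_mm(x_frac, y_frac):
    """Suggest a minimum touch-target size from the R95 figures quoted above.

    x_frac, y_frac give the target position as fractions of screen width
    and height (0, 0 = top-left). Accuracy is best at the centre (~7 mm)
    and worst toward the edges and bottom corners (~12 mm).
    """
    # distance from screen centre: 0.0 (centre) .. 1.0 (at a corner)
    d = max(abs(x_frac - 0.5), abs(y_frac - 0.5)) * 2
    base = 7.0 + 3.0 * d           # grow toward the edges
    if y_frac > 0.5:               # users are least accurate near the bottom
        base += 2.0 * d * (y_frac - 0.5) * 2
    return min(base, 12.0)
```

A designer would call this per control and round up, rather than use one fixed size everywhere.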
Hand position, digit used and switching
Users of handheld and portable touchscreen devices hold them in a variety of ways, and routinely change
their method of holding and selection to suit the position and type of input. There are four basic types of
handheld interaction:
Holding at least in part with both hands, tapping with a single thumb
Holding with one hand, tapping with the finger (or rarely, thumb) of another hand
Holding the device in one hand, and tapping with the thumb from that hand
Holding with two hands and tapping with both thumbs
Use rates vary widely. While two-thumb tapping is encountered rarely (1-3%) for many general
interactions, it is used for 41% of typing interaction.
In addition, devices are often placed on surfaces (desks or tables) and tablets especially are used in stands.
The user may point, select, or gesture in these cases with a finger or thumb, and varies which digit is used.
Combined with haptics
Touchscreens are often used with haptic response systems. A common example of this technology is the
vibratory feedback provided when a button on the touchscreen is tapped. Haptics are used to improve the
user's experience with touchscreens by providing simulated tactile feedback, and can be designed to react
immediately, partly countering on-screen response latency. Research from the University of Glasgow
Scotland [Brewster, Chohan, and Brown 2007 and more recently Hogan] demonstrates that sample users
reduce input errors (20%), increase input speed (20%), and lower their cognitive load (40%) when
touchscreens are combined with haptics or tactile feedback [vs. non-haptic touchscreens].
"Gorilla arm"
Extended use of gestural interfaces without the ability of the user to rest their arm is referred to as "gorilla
arm." It can result in fatigue, and even repetitive stress injury when routinely used in a work setting.
Certain early pen-based interfaces required the operator to work in this position for much of the work day.
Allowing the user to rest their hand or arm on the input device or a frame around it is a solution for this in
many contexts. This phenomenon is often cited as a prima facie example of what not to do in ergonomics.
Unsupported touchscreens are still fairly common in applications such as ATMs and data kiosks, but are
not an issue as the typical user only engages for brief and widely spaced periods.
Fingerprints
Fingerprints and smudges on a tablet computer touchscreen
Touchscreens can suffer from the problem of fingerprints on the display. This can be mitigated by the use of materials with optical coatings designed to reduce the visible effects of fingerprint oils, by oleophobic coatings (as on most modern smartphones), which lessen the amount of oil residue left on the screen, or by installing a matte-finish anti-glare screen protector, which creates a slightly roughened surface that does not easily retain smudges.
"VIRTUAL LAN TECHNOLOGY"
Nayanika Saha
CSE – 4th year
A Local Area Network (LAN) was originally defined as a network of computers located within the same
area. Today, Local Area Networks are defined as a single broadcast domain. This means that if a user
broadcasts information on his/her LAN, the broadcast will be received by every other user on the LAN.
Broadcasts are prevented from leaving a LAN by using a router. The disadvantage of this method is that routers usually take more time to process incoming data than a bridge or a switch does. More
importantly, the formation of broadcast domains depends on the physical connection of the devices in the
network. Virtual Local Area Networks (VLANs) were developed as an alternative to using routers to contain broadcast traffic. VLANs allow a network manager to logically segment a LAN into different broadcast domains (see Figure 2). Since this is a logical segmentation and not a physical one,
workstations do not have to be physically located together. Users on different floors of the same building,
or even in different buildings can now belong to the same LAN.
These two figures show the physical view and the logical view:
Types of VLAN
1) Layer 1 VLAN (Membership by Port): Membership in a VLAN can be defined based on the
ports that belong to the VLAN. For example, in a bridge with four ports, ports 1, 2, and 4 belong
to VLAN 1 and port 3 belongs to VLAN 2.
2) Higher Layer VLANs
It is also possible to define VLAN membership based on applications or services, or any
combination thereof. For example, file transfer protocol (FTP) applications can be executed on one
VLAN and telnet applications on another VLAN.
Types of connection
1) Trunk Link
All the devices connected to a trunk link, including workstations, must be VLAN-aware. All
frames on a trunk link must have a special header attached. These special frames are called
tagged frames.
2) Access Link
An access link connects a VLAN-unaware device to the port of a VLAN-aware bridge. All
frames on access links must be untagged, i.e. implicitly tagged (see Figure 8). The VLAN-unaware
device can be a LAN segment with VLAN-unaware workstations, or it can be a number of LAN
segments containing VLAN-unaware devices (a legacy LAN).
3) Hybrid Link
This is a combination of the previous two links. This is a link where both VLAN-aware and
VLAN-unaware devices are attached (see Figure 9). A hybrid link can have both tagged and
untagged frames, but all the frames for a specific VLAN must be either tagged or untagged.
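The "special header" on tagged frames is the 4-byte IEEE 802.1Q tag. As a minimal sketch (the helper name is ours; the field layout follows the 802.1Q standard), inserting a tag into a raw Ethernet frame looks like this:

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag into a raw Ethernet frame.

    The 4-byte tag goes right after the destination and source MAC
    addresses (12 bytes): a 0x8100 TPID, then a 16-bit TCI holding
    3 bits of priority, 1 DEI bit (left at 0 here) and the 12-bit VLAN ID.
    """
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be 1..4094")
    tci = (priority << 13) | vlan_id
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]
```

A VLAN-aware bridge performs exactly this insertion when forwarding a frame from an access port onto a trunk, and strips the tag again in the other direction.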
The main advantages of VLANs are listed below.
• Broadcast Control: Broadcasts are required for the normal function of a network, and many protocols and applications depend on broadcast communication to function properly. A layer 2 switched network is a single broadcast domain, so broadcasts reach even those network segments where a particular broadcast has no relevance, consuming available network bandwidth. A layer 3 device (typically a router) is used to segment a broadcast domain. If we segment a large LAN into smaller VLANs, we can reduce broadcast traffic, as each broadcast will be sent only on the relevant VLAN.
• Security: VLANs provide enhanced network security. In a VLAN network environment, with multiple
broadcast domains, network administrators have control over each port and user. A malicious user can no
longer just plug their workstation into any switch port and sniff the network traffic using a packet sniffer.
The network administrator controls each port and whatever resources it is allowed to use. VLANs also help to keep sensitive traffic originating in one department confined within that department.
• Cost: Segmenting a large LAN into smaller VLANs is cheaper than creating a routed network with routers, because routers are normally costlier than switches.
• Physical Layer Transparency: VLANs are transparent to the physical topology and medium over which the network is connected.
Disadvantages of VLAN
Management is a little more complex. Nothing out of this world, but you need to make sure which ports are configured as access ports and which VLANs are permitted on the trunk ports. This is only done the first time; afterwards you only need to permit VLANs and configure access for the machines that need to communicate. The other thing is that when you need to add a new VLAN, you must configure it on every switch in your network, or configure VTP, which is a little more complex too.
"GSM SECURITY AND ENCRYPTION"
Neha Chowdhury
CSE – 4th year
What is GSM:-
If you are in Europe or Asia and using a mobile phone, then most probably you are using GSM technology in
your mobile phone.
GSM stands for Global System for Mobile Communication. It is a digital cellular technology used for transmitting mobile voice and data services.
The concept of GSM emerged from a cell-based mobile radio system at Bell Laboratories in the early 1970s.
GSM is the name of a standardization group established in 1982 to create a common European mobile
telephone standard.
GSM is the most widely accepted standard in telecommunications and it is implemented globally.
Why GSM:-
Listed below are the features of GSM that account for its popularity and wide acceptance.
Improved spectrum efficiency
International roaming
Low-cost mobile sets and base stations (BSs)
High-quality speech
Compatibility with Integrated Services Digital Network (ISDN) and other telephone company services
GSM – Architecture:-
GSM is the most secure cellular telecommunications system available today. GSM has its security methods standardized. GSM maintains end-to-end security by retaining the confidentiality of calls and the anonymity of the GSM subscriber. Temporary identification numbers are assigned to the subscriber's number to maintain the privacy of the user. The privacy of the communication is maintained by applying encryption algorithms and frequency hopping that can be enabled using digital systems and signaling. This article gives an outline of the security measures implemented for GSM subscribers. A GSM network comprises many functional units. These functions and interfaces are explained below. The GSM network can be broadly divided into:
The Mobile Station (MS)
The Base Station Subsystem (BSS)
The Network Switching Subsystem (NSS)
The Operation Support Subsystem (OSS)
Given below is a simple pictorial view of the GSM architecture. The additional components of the GSM architecture comprise databases and messaging system functions:
The following diagram shows the GSM network along with the added elements:
The MS and the BSS communicate across the Um interface. It is also known as the air interface or
the radio link. The BSS communicates with the Network Switching Subsystem (NSS) across
the A interface.
GSM Network Components:-
In a GSM network, the following databases and functional units are defined:
Home Location Register (HLR)
Visitor Location Register (VLR)
Equipment Identity Register (EIR)
Authentication Center (AuC)
SMS Serving Center (SMS SC)
Gateway MSC (GMSC)
Chargeback Center (CBC)
Transcoder and Adaptation Unit (TRAU)
GSM – Specification:
The requirements for different Personal Communication Services (PCS) systems differ for each PCS network.
Vital characteristics of the GSM specification are listed below:
Modulation:
Modulation is the process of transforming the input data into a suitable format for the transmission medium.
The transmitted data is demodulated back to its original form at the receiving end. The GSM uses Gaussian
Minimum Shift Keying (GMSK) modulation method.
Access Methods:
Since radio spectrum is a limited resource shared by all the users, GSM devised a combination of TDMA and FDMA as the method to divide the bandwidth among the users. In this process, the FDMA part divides the total 25 MHz bandwidth into 124 carrier frequencies of 200 kHz bandwidth each.
Transmission Rate: The total symbol rate for GSM at 1 bit per symbol in GMSK produces 270.833 ksymbols/second. The gross transmission rate of a timeslot is 22.8 kbps. GSM is a digital system with an over-the-air bit rate of about 270 kbps.
Frequency Band: The uplink frequency range specified for GSM is 890 - 915 MHz (basic 900 MHz band only).
The downlink frequency band is 935 - 960 MHz (basic 900 MHz band only).
Channel Spacing: Channel spacing indicates the spacing between adjacent carrier frequencies. For GSM, it is 200
kHz.
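The band, carrier count, and channel spacing fit together arithmetically: 124 carriers at 200 kHz fill the 25 MHz allocation, and each downlink carrier sits 45 MHz above its uplink partner (the standard primary GSM 900 duplex spacing). A small sketch:

```python
def gsm900_carrier_mhz(arfcn: int):
    """Uplink/downlink carrier frequencies for a primary GSM 900 channel.

    The 25 MHz band is divided into 124 carriers spaced 200 kHz apart;
    the downlink carrier is 45 MHz above the uplink carrier.
    """
    if not 1 <= arfcn <= 124:
        raise ValueError("primary GSM 900 uses channel numbers 1..124")
    uplink = 890.0 + 0.2 * arfcn
    return uplink, uplink + 45.0
```

Channel 1 is at 890.2 / 935.2 MHz and channel 124 at 914.8 / 959.8 MHz, which stays inside the 890-915 and 935-960 MHz bands.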
GSM - Addresses and Identifiers: GSM treats the users and the equipment in different ways. Phone numbers, subscribers, and equipment identifiers are some of the known ones. There are many other identifiers that have been well-defined, which are required for the subscriber's mobility management and for addressing the remaining network elements. Vital addresses and identifiers that are used in GSM are described below.
International Mobile Station Equipment Identity (IMEI):- The International Mobile Station Equipment Identity (IMEI) is like a serial number which distinctively identifies a mobile station internationally. It is allocated by the equipment manufacturer and registered by the network operator, who stores it in the Equipment Identity Register (EIR). By means of the IMEI, one recognizes obsolete, stolen, or non-functional equipment. Following are the parts of the IMEI:
Type Approval Code (TAC):- 6 decimal places, centrally assigned.
Final Assembly Code (FAC):- 2 decimal places, assigned by the manufacturer.
Serial Number (SNR):- 6 decimal places, assigned by the manufacturer.
Spare (SP):- 1 decimal place
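The four fields can be sliced directly out of a 15-digit IMEI. This sketch assumes the classic layout listed above (TAC 6 digits, FAC 2, SNR 6, spare 1); in later revisions of the standard the TAC absorbed the FAC, so modern IMEIs are read slightly differently.

```python
def parse_imei(imei: str) -> dict:
    """Split a 15-digit IMEI into the fields described above."""
    if len(imei) != 15 or not imei.isdigit():
        raise ValueError("IMEI must be 15 decimal digits")
    return {
        "TAC": imei[:6],     # Type Approval Code, centrally assigned
        "FAC": imei[6:8],    # Final Assembly Code
        "SNR": imei[8:14],   # Serial Number
        "SP": imei[14],      # spare digit
    }
```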
International Mobile Subscriber Identity (IMSI): Every registered user has a unique International Mobile Subscriber Identity (IMSI) stored in their Subscriber Identity Module (SIM). The IMSI comprises the following parts:
Mobile Country Code (MCC): 3 decimal places, internationally standardized.
Mobile Network Code (MNC): 2 decimal places, for unique identification of the mobile network within the
country.
Mobile Subscriber Identification Number (MSIN): Maximum 10 decimal places, identifying the subscriber within the network.
Mobile Subscriber ISDN Number (MSISDN):
The real telephone number of a mobile station is the Mobile Subscriber ISDN Number (MSISDN). Based on the SIM, a mobile station can have several MSISDNs, as separate MSISDNs can be assigned to the same SIM.
Listed below is the structure followed by MSISDN categories, as they are defined based on the international ISDN numbering plan:
Country Code (CC) : Up to 3 decimal places.
National Destination Code (NDC):- Typically 2-3 decimal places.
Subscriber Number (SN):- Maximum 10 decimal places.
Mobile Station Roaming Number (MSRN): The Mobile Station Roaming Number (MSRN) is a temporary, location-dependent ISDN number, assigned to a mobile station by the regionally responsible Visitor Location Register (VLR). Using the MSRN, incoming calls are channelled to the MS. The MSRN has the same structure as the MSISDN.
Country Code (CC): of the visited network.
National Destination Code (NDC): of the visited network.
Location Area Identity (LAI):
Each Location Area within a PLMN has its own unique Location Area Identity (LAI). The LAI is based on international standards and structured as follows:
Mobile Country Code (MCC): 3 decimal places.
Mobile Network Code (MNC): 2 decimal places.
Location Area Code (LAC): maximum 5 decimal places, or maximum twice 8 bits coded in hexadecimal
(LAC < FFFF).
Temporary Mobile Subscriber Identity (TMSI):
A Temporary Mobile Subscriber Identity (TMSI) can be assigned by the VLR, which is responsible for the current location of a subscriber. The TMSI needs to have only local significance in the area handled by the VLR. It is stored on the network side only in the VLR and is not passed to the Home Location Register (HLR). Together with the current location area, the TMSI identifies a subscriber uniquely. It can contain up to 4 × 8 bits.
Local Mobile Subscriber Identity (LMSI):
Each mobile station can be assigned a Local Mobile Subscriber Identity (LMSI), a unique key generated by the VLR. This key can be used as an auxiliary search key for each mobile station within its region, and can also help accelerate database access.
Cell Identifier (CI):
Using the Cell Identifier (CI) (maximum 2 × 8 bits), the individual cells within a Location Area can be recognized. When the CI is combined with the LAI, the resulting Global Cell Identity (LAI + CI) identifies a cell uniquely.
Mobile Station Authentication:
The GSM network authenticates the identity of the subscriber through the use of a challenge-response
mechanism. A 128-bit Random Number (RAND) is sent to the MS. The MS computes the 32-bit Signed
Response (SRES) based on the encryption of the RAND with the authentication algorithm (A3) using
the individual subscriber authentication key (Ki). Upon receiving the SRES from the subscriber, the
GSM network repeats the calculation to verify the identity of the subscriber.
The individual subscriber authentication key (Ki) is never transmitted over the radio channel, as it is
present in the subscriber's SIM, as well as the AUC, HLR, and VLR databases. If the received SRES
agrees with the calculated value, the MS has been successfully authenticated and may continue. If the
values do not match, the connection is terminated and an authentication failure is indicated to the MS.
Signaling and Data Confidentiality: The SIM contains the ciphering key generating algorithm (A8) that is used
to produce the 64-bit ciphering key (Kc). This key is computed by applying the same random number
(RAND) used in the authentication process to ciphering key generating algorithm (A8) with the
individual subscriber authentication key (Ki).
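The challenge-response exchange and the Kc derivation can be sketched end to end. The real A3/A8 algorithms are operator-specific and confidential (COMP128 is a well-known example), so HMAC-SHA256 stands in for them below purely as an assumption for illustration; the output widths (32-bit SRES, 64-bit Kc) and the flow match the description above.

```python
import hashlib
import hmac
import os

def run_a3_a8(ki: bytes, rand: bytes):
    """SIM side: derive SRES (A3) and Kc (A8) from Ki and the challenge.

    HMAC-SHA256 is a stand-in for the operator's real A3/A8 algorithms.
    """
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    sres = digest[:4]    # 32-bit Signed Response (A3 output)
    kc = digest[4:12]    # 64-bit ciphering key (A8 output)
    return sres, kc

def authenticate(ki: bytes) -> bool:
    """Network side: send a 128-bit RAND, then compare SRES values.

    Ki itself never crosses the radio link; only RAND and SRES do.
    """
    rand = os.urandom(16)                       # 128-bit random challenge
    ms_sres, _ms_kc = run_a3_a8(ki, rand)       # computed on the SIM
    net_sres, _net_kc = run_a3_a8(ki, rand)     # recomputed by the AuC
    return hmac.compare_digest(ms_sres, net_sres)
```

Note that both sides end up with the same Kc without it ever being transmitted, which is what allows ciphering to start immediately after authentication.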
Subscriber Identity Confidentiality: To ensure subscriber identity confidentiality, the Temporary Mobile
Subscriber Identity (TMSI) is used. Once the authentication and encryption procedures are done, the
TMSI is sent to the mobile station. After the receipt, the mobile station responds. The TMSI is valid in
the location area in which it was issued.
Telephony Service: These services can be charged on a per-call basis. The call initiator pays the charges, and incoming calls are nowadays usually free. A customer can be charged based on different parameters such as:
International call or long distance call.
Local call.
Call made during peak hours.
Call made during night time.
SMS Service:
Most service providers charge their customers for SMS services based on the number of text messages sent. There are other premium SMS services for which service providers charge more than the normal SMS rate. These services are availed in collaboration with television or radio networks, which solicit SMS messages from their audiences.
Most of the time, the charges are paid by the SMS sender, but for some services, such as stock and share prices, mobile banking facilities, and leisure booking services, the recipient of the SMS has to pay for the service.
GPRS Services: Using the GPRS service, you can browse, play games on the Internet, and download movies. A service provider will therefore charge you based on the data uploaded and downloaded on your mobile phone. These charges are typically based on the kilobytes of data downloaded or uploaded.
An additional parameter could be the QoS provided to you. If you want to watch a movie, a lower QoS may work because some data loss may be acceptable; but if you are downloading a zip file, a single lost byte will corrupt your complete downloaded file. Another parameter could be peak versus off-peak time to download a data file or to browse the Internet.
Supplementary Services:
Most supplementary services are provided on a monthly rental basis or absolutely free. For example, call waiting, call forwarding, calling number identification, and call hold are available at zero cost. Call barring is a service which providers mainly use to recover their dues; otherwise it is rarely used by subscribers. Call conferencing is a form of simple telephone call where the customer is charged for the multiple calls made at a time.
GSM – Operations: Once a Mobile Station initiates a call, a series of events takes place. Analyzing these
events can give an insight into the operation of the GSM system.
Mobile Phone to Public Switched Telephone Network (PSTN):
When a mobile subscriber makes a call to a PSTN telephone subscriber, the following sequence of
events takes place:
The MSC/VLR receives the message of a call request.
The MSC/VLR checks if the mobile station is authorized to access the network. If so, the mobile
station is activated. If the mobile station is not authorized, then the service will be denied.
MSC/VLR analyzes the number and initiates a call setup with the PSTN.
MSC/VLR asks the corresponding BSC to allocate a traffic channel (a radio channel and a time
slot).
PSTN to Mobile Phone:
When a PSTN subscriber calls a mobile station, the following sequence of events takes place:
The Gateway MSC receives the call and queries the HLR for the information needed to route the
call to the serving MSC/VLR.
The GMSC routes the call to the MSC/VLR.
The MSC checks the VLR for the location area of the MS.
The MSC contacts the MS via the BSC through a broadcast message, that is, through a paging
request.
GSM - Protocol Stack: GSM architecture is a layered model that is designed to allow communications between two different systems. The lower layers assure the services of the upper-layer protocols. Each layer passes suitable notifications to ensure the transmitted data has been formatted, transmitted, and received accurately. The GSM protocol stack diagram is shown below:
MS Protocols
Based on the interface, the GSM signaling protocol is assembled into three general layers:
Layer 1: The physical layer. It uses the channel structures over the air interface.
Layer 2: The data-link layer. Across the Um interface, the data-link layer is a modified version
of the Link access protocol for the D channel (LAP-D) protocol used in ISDN, called Link
access protocol on the Dm channel (LAP-Dm). Across the A interface, the Message Transfer
Part (MTP), Layer 2 of SS7 is used.
Layer 3: The third layer of the GSM signalling protocol is divided into three sublayers: Radio Resource management (RR), Mobility Management (MM), and Connection Management (CM).
Happy Solving!!
1. Atom Egoyan will be honoured with the lifetime achievement award at the International Film
Festival of India (IFFI 2017). He hails from which country?
[A] Canada
[B] Morocco
[C] New Zealand
[D] China
2. Who will be honoured with the 2017 Indira Gandhi Prize for Peace, Disarmament and
Development?
[A] Raghuram Rajan
[B] Mamata Banerjee
[C] Manmohan Singh
[D] Pranab Mukherjee
3. Which city is hosting the 2017 Women's Youth World Boxing Championship (YWBC)?
[A] New Delhi
[B] Pune
[C] Kochi
[D] Guwahati
"Digital Watermarking Applications"
Parasmita Gupta
CSE – 4th year
The advancement of the Internet has resulted in many new opportunities for the creation and delivery of
content in digital form. Applications include electronic advertising, real-time video and audio delivery,
digital repositories and libraries, and Web publishing. But the important question that arises in these
applications is data security. It has been observed that current copyright laws are not sufficient for
dealing with digital data. Hence the protection and enforcement of intellectual property rights for digital
media has become a crucial issue. This has led to an interest towards developing new copy deterrence and
protection mechanisms. One such effort that has been attracting increasing interest is based on digital
watermarking techniques.
Digital Watermarking is an adaptation of the commonly used and well known paper watermarks to the
digital world.
What is digital watermarking?
Digital Watermarking describes methods and technologies that hide information, for example a number or
text, in digital media, such as images, video or audio. The embedding takes place by manipulating the
content of the digital data, which means the information is not embedded in the frame around the data.
The hiding process has to be such that the modifications of the media are imperceptible. For images, this means that the modifications of the pixel values have to be invisible. Furthermore, the watermark must be either robust or fragile, depending on the application. By "robust", we mean the capability of the watermark to resist manipulations of the media, such as lossy compression (where compressing data and then decompressing it retrieves data that may well be different from the original, but is close enough to be useful in some way), scaling, and cropping, among others. In some cases, the watermark may need to be fragile. "Fragile" means that the watermark should not resist tampering, or would resist only up to a certain, predetermined extent.
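As a concrete, deliberately fragile example of embedding by pixel manipulation, here is a minimal least-significant-bit (LSB) scheme. LSB substitution is one classic textbook technique, not the specific method behind the figures in this article: changing the lowest bit of an 8-bit pixel alters its value by at most 1, which is invisible, but almost any processing of the image destroys the mark.

```python
def embed(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel.

    pixels is a flat list of 8-bit values; each watermark bit replaces
    the lowest bit of one pixel, so no value changes by more than 1.
    """
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]

def extract(pixels, n):
    """Read the first n embedded watermark bits back out."""
    return [p & 1 for p in pixels[:n]]
```

Robust watermarks, by contrast, are typically embedded in transform-domain coefficients so they survive compression and scaling.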
What is digital video watermarking?
Digital video watermarking can be achieved by either applying still image technology to each film
frame or using dedicated methods that exploit inherent features of the video sequence.
What is watermarking used for?
The first applications that came to mind were related to copyright protection of digital media. In the
past, duplicating artwork was quite complicated and required a high level of expertise for the counterfeit
to look like the original. However, in the digital world, this is not true. Today, it is possible for almost
anyone to duplicate or manipulate digital data, while not losing data quality. Similar to a painter's
signature or monogram, today's artists can copyright their work by hiding their name within the image.
Hence, the embedded watermark allows identification of the owner of the work. It is clear that this
concept is also applicable to other media, such as digital video and audio. Currently, the unauthorized
distribution of digital audio and video over the Internet is a big problem. In this scenario, digital
watermarking may be useful to set up controlled audio distribution and to provide efficient means for
copyright protection, usually in collaboration with international registration bodies.
Are there any other applications where watermarking may be used?
There are a number of possible applications for digital watermarking
technologies and this number is increasing rapidly. For example, in
the field of data security, watermarks may be used for certification,
authentication, and conditional access. Certification is an important
issue for official documents, such as identity cards or passports.
On the left is an example of a protected identity card. The identity
number "123456789" is written in clear text on the card and hidden as
a digital watermark in the identity photo. Therefore, switching or
manipulating the identity photo will be detected.
Digital watermarking also allows information on documents to be linked. That means that key information is written twice on the document.
For instance, the name of a passport owner is normally printed in
clear text. But it would also be hidden as an invisible watermark in
the passport photo. If anyone tries to tamper with the passport by
replacing the photo, it would be possible to detect the change by
scanning the passport and verifying the name hidden in the photo.
The picture on the left shows a printing machine from Intercard for
various types of plastic cards (Courtesy of Intercard, Switzerland).
Tampering with images
Another application is the authentication of image content. The goal of this application is to detect any alterations and modifications made to an image. The three pictures below illustrate this application. Image (a) shows an original photo of a car that has been protected with a watermarking technology. In photo (b), the same picture is shown, but with a small modification: the numbers on the license plate have been changed. Image (c) shows the photo after running the digital watermark detection program on the tampered photo. The tampered areas are indicated in white. We can clearly see that the detected area corresponds to the modifications applied to the original photo.
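A detector like the one behind image (c) can be approximated with a block-wise integrity check. This sketch compares per-block checksums recorded at protection time rather than bits carried by an embedded fragile watermark, purely to illustrate how tampered regions get localized; the block size and hashing scheme are our assumptions.

```python
import hashlib

def block_hashes(pixels, block=8):
    """Checksum each run of `block` pixel values: a stand-in for the
    per-region signature a fragile watermark would carry."""
    return [hashlib.sha256(bytes(pixels[i:i + block])).hexdigest()
            for i in range(0, len(pixels), block)]

def tampered_blocks(original_hashes, suspect_pixels, block=8):
    """Return the indices of blocks whose content no longer matches:
    the areas a detector would paint white in the tampered photo."""
    suspect = block_hashes(suspect_pixels, block)
    return [i for i, (a, b) in enumerate(zip(original_hashes, suspect))
            if a != b]
```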
Invisible marking on blank paper
Digital watermarks can also be adapted to mark white paper with the goal of authenticating the originator, verifying the authenticity of the document content, and dating the document. Such applications are especially of interest for official documents, such as contracts. For example, the digital watermark can be used to embed the name of the lawyer or key monetary amounts. In the event of a dispute, the digital watermark is then read, allowing authentication of key information in the contract. There are also patented processes to invisibly mark blank white paper using regular visible ink; one such technology is known as Cryptoglyph.
Digital Media Management
Besides applications in the fields of copyright protection, authentication and security, digital
watermarks can also serve as invisible labels and content links. For example, photo development
laboratories may insert a watermark into the picture to link the print to its negative. To find the negative
for a given print, simply scan the print and extract the information about the negative. In another
scenario, digital watermarks may also be used as a geometrical reference, which may be useful for
programs such as optical character recognition (OCR) software. The embedded calibration watermark
may improve the detection reliability of the OCR software since it allows the determination of
translation, rotation, and scaling.
Where is the technology headed next?
An exhaustive list of digital watermarking applications is of course impossible. However, it is
interesting to note the increasing interest in fragile watermarking technologies. Especially promising are
applications related to copy protection of printed media. Examples here include the protection of bills
with digital watermarks. Various companies have projects in this direction and it is very likely that fully
functioning solutions will soon be available.
Protecting digital media data
The media security group of Fraunhofer SIT has developed numerous technologies for
protecting digital media data, with digital watermarking as the group's main focus.
Digital watermarking embeds arbitrary information in digital media such as audio,
video, images and eBooks. This is achieved through imperceptible, systematic changes
to the media data. Security and confidentiality of the embedded data are guaranteed
by a secret key. Watermarks can be configured to be robust against alterations of
their carrier medium; such changes include (but are not limited to) format changes,
analogue-to-digital conversion, scaling and cropping.
The major advantage of watermarking is that a watermarked medium is still a medium of
the same type: one can do everything with a watermarked medium that one could do with
an unwatermarked one. It is still playable and copyable. Digital watermarking thus
does not restrict usage; only abuse becomes detectable and traceable.
An incentive to stay honest
With the Internet, protection of copyrights and prevention of illegal distribution
become ever more important. The same holds true for the detection of manipulated or
forged digital media. Digital watermarking can, for example, be used to protect
copyrights by embedding information about the author or copyright holder into the
medium. Images can thus carry hidden information about the photographer or the photo
agency. Moreover, every single copy of the same music file can be watermarked to
distinguish and trace individual users. This traceability is what protects against
illegal distribution and gives users an incentive to stay honest.
Tracing manipulations
Significant changes to a medium can destroy or damage an embedded watermark.
Integrity watermarks use this property to detect (unwanted) changes to a medium.
They are developed in such a way that they survive allowed changes (like format
conversions) while still signalling manipulations of the medium's content.
Broad media security competence
Multimedia security can only be guaranteed with a holistic security concept.
Therefore, our security competence pairs digital watermarking with supporting IT
security technologies such as encryption, digital signatures and DRM standards.
Types of watermark
Visible watermarks: Visible watermarks are an extension of the concept of logos. Such
watermarks are applicable to images only. These logos are inlaid into the image, but
they are transparent. Such watermarks cannot be removed by cropping the center part
of the image. Further, such watermarks are protected against attacks such as
statistical analysis.
The drawbacks of visible watermarks are that they degrade the image quality and can
be detected by visual means only; it is not possible to detect them with dedicated
programs or devices. Such watermarks have applications in maps, graphics and
software user interfaces.
Invisible watermark: An invisible watermark is hidden in the content. It can be
detected only by an authorized agency. Such watermarks are used for content and/or
author authentication and for detecting unauthorized copying.
Public watermark: Such a watermark can be read or retrieved by anyone using a
specialized algorithm. In this sense, public watermarks are not secure. However,
public watermarks are useful for carrying IPR information. They are good
alternatives to labels.
Fragile watermark: Fragile watermarks are also known as tamper-proof watermarks. Such watermarks
are destroyed by data manipulation.
Private Watermark: Private watermarks are also known as secure watermarks. To read or retrieve such
a watermark, it is necessary to have the secret key.
Perceptual watermarks: A perceptual watermark exploits aspects of the human sensory
system to provide an invisible yet robust watermark. Such watermarks are also known
as transparent watermarks and provide extremely high-quality content.
Bit-stream watermarking: The term is sometimes used for watermarking of compressed data such as
video.
Text document watermarking
A text document is a discrete information source. In discrete sources, the contents
cannot be modified, so generic watermarking schemes are not applicable. The two
approaches to text watermarking are hiding watermark information in the semantics
and hiding it in the text format.
In semantics-based watermarking, the text is designed around the message to be
hidden; misleading information thus covers the watermark information. Such
techniques defy a systematic, scientific approach.
By text format, we mean layout and appearance. Commonly used techniques to hide
watermark information are line-shift coding, word-shift coding and feature coding.
In line-shift coding, individual lines of the document are shifted upward or downward
by very small amounts. The watermark information is encoded in the pattern of upward
and downward shifts. Watermark recovery is simple because line spacing in normal
text is uniform. In word-shift coding, words are shifted horizontally in order to
modify the spacing between consecutive words. When detecting the watermark, the
original word-spacing data is required because word spacing is normally variable.
In feature coding, features of some characters are modified. In a typical case, the
lengths of the end lines of characters such as b, d and h are modified. When
detecting the watermark, the original lengths must be known.
The formatted-text method of watermarking can be defeated easily by retyping the
whole text using a new character font. The retyping can be done manually or using an
automated optical character recognition (OCR) unit. OCR-based techniques are not
perfect and require human supervision.
In general, such watermark removal methods are expensive. For text watermarking, the
goal is to make watermark removal expensive enough to discourage it. The above
methods are robust enough to resist printing and consecutive photocopying up to the
10th generation.
Software protection
Software is a discrete information source: not even a single bit may be added to or
deleted from it. Thus, watermarking techniques are not suitable for its copyright
protection.
The basic objective of a software protection system is to ensure that the software
can be distributed openly in protected (encrypted) form but can only be used within
a trusted hardware system. Such a system has provisions to enforce the owner's
license restrictions as well as to protect the software.
A user first has to obtain a license that contains information about accessing the
software and the decryption key. A user may be allowed access to certain portions of
the software for a defined period only. After obtaining a license, the user can
download the encrypted software over the Internet. Alternatively, the distributor
can send the software directly.
Trusted hardware is secure hardware containing embedded authentication software; a
user is required to present a secret key before access is granted. A simple low-cost
solution is to use a smart card in which the secret key is stored.
Trusted hardware must also ensure that the licensed software is protected against
tampering and piracy. Executable software is aware of the access control mechanism
and can interrogate it to determine whether a particular feature is allowed by the
license controlling the software.
To ensure a long period of protection, it is essential that the secret information
be minimal. System security depends on storing the private decryption key in special
hardware.
Watermarking: Watermarking is also a sub-discipline of information hiding. Watermarking is the
process of embedding secret and robust identifiers inside audio-visual content. Thus, the watermarking
process is generally applicable to waveform type of information sources.
The purpose of watermarking is to establish the copyright of the content creator. In this sense, watermarks
are also known as hidden copyright messages.
Watermarking secures the content. Thus, any attempt to modify the content can be easily detected.
Watermarking can trace the path followed by content in a distribution chain. This helps in tracing
malicious users.
By detecting watermarks embedded in the content, it is possible to authenticate genuineness.
Label: This is readable public information added to content for IPR protection. It conveys ownership of
content, indexing and authenticity. A label does not modify the content. Digital signature is an example of
a label.
A label along with valid certification and cryptographic keys allows verification of the origin and the
integrity of the content.
It is impossible to prevent the label being removed from or replaced in the content,
because labels are separate from the content. However, a label generally offers the
following functionality:
Authentication of origin of content.
Strict integrity of the bit stream.
Integrity of identification numbers and IPR data.
Integrity of the meaning of the content.
Fingerprinting:
A fingerprint is a hidden serial number embedded in content. It helps in identifying
copyright violators.
Conclusion
Legitimate businesses and webmasters have nothing to fear from copyright law or the
new wave of online enforcement technology found in digital watermarks and tracking
services. By using audio files and images only when they have obtained permission
from the copyright owner or the appropriate licensing agency, webmasters should be
free to continue making their sites audio-visually appealing.
Scrupulous webmasters, however, should not be lulled into a false sense of security in the age of digital
watermarks. While a webmaster would be wise to examine images and sound files for watermarks before
incorporating them on any web site, the absence of a watermark does not necessarily mean that a file is
unprotected by copyright and is therefore available for use without liability. Not only might a digital
watermark have disappeared through editing or been stripped before it arrives on a webmaster's desktop,
but the technology is too new to apply to a significant number of pre-existing images and audio files. Just
as an author is not required to affix a copyright notice on the hard copies of his work in order to gain
protection from the copyright laws, use of a digital watermark surely is a voluntary act -- and those who
do not use it will not forfeit their intellectual property rights.
In addition, webmasters should remain aware that a significant portion of content on
the World Wide Web -- plain old text -- may be protected by copyright even though it
cannot be embedded with a digital watermark. Copying magazine articles without
permission of the copyright owner can be just as significant a copyright violation
as copying photographs from the same magazine.
If anything, law-abiding webmasters should welcome digital watermarks and tracking. While Internet
scofflaws have been stealing copyrighted works by scanning images, right-clicking on icons and lifting
music from commercial CDs, webmasters who did not want to risk their businesses always have ensured
that they used royalty-free or works in the public domain or obtained permission of the copyright owner.
If nothing else, digital watermarks will deter illegal copying, leveling the playing field for all webmasters.
"Egg"stacy...
There is a building of 100 floors
-If an egg drops from the Nth floor or above it will break.
-If it’s dropped from any floor below, it will not break.
You’re given 2 eggs.
Find N..
How many drops do you need to make?
What strategy should you adopt to minimize the number of egg drops it takes to
find the solution?
"iOS - Mobile operating system by Apple" Puja Mishra
CSE - 4th year
iOS (previously named iPhone OS) is an operating system for mobile devices, made and
sold by Apple Inc. It is the mobile operating system of the iPhone, the iPod Touch,
the iPad, Apple TV and similar devices. iOS was originally called iPhone OS but was
renamed in 2010 to reflect the operating system's evolving support for additional
Apple devices.
History
When it was released in 2007, iOS was simply known as a version of OS X running on
the first-generation iPhone. On January 9, 2007, Apple announced at a conference
that there would be a new product, the iPhone, and that it would have a
"revolutionary" operating system.
On March 6, 2008, Apple renamed this OS X variant to iPhone OS, following the
release of the iPhone software development kit.
On July 11, 2008, along with the release of the iPhone 3G, Apple released iPhone OS
2.0, which introduced the App Store.
In June 2009, Apple released iPhone OS 3.0 along with the iPhone 3GS. It was a minor
upgrade to iPhone OS 2.0 with a few new capabilities. iPhone OS 3.0 was available
for the first iPhone and iPod Touch, but not all features were supported on those
devices; the last version to support them was 3.1.3. It was later made available for
the iPad upon its release (as version 3.2).
In 2010, Apple renamed "iPhone OS" to "iOS". The trademark "IOS" had been used by
the tech company Cisco, so Apple licensed the trademark in order to avoid conflicts.
In mid-2010, Apple released a significant update, iOS 4.0, which added multitasking
(the ability to run several apps at the same time, limited by the device's RAM) and
the option to set a home-screen wallpaper. It also had a more polished design and
was the first version of iOS to be available for the iPod Touch free of charge.
However, the iPhone 3G and iPod Touch (2nd generation) received only limited
features: they could not multitask or use a home-screen wallpaper.
In October 2011, Apple released iOS 5, which introduced many new features such as
the pull-down notifications bar, a free messaging service called iMessage, iCloud,
and more. A voice assistant named Siri was also introduced on the iPhone 4S.
iOS 6 was released on September 19, 2012 with even more features. Siri came to the
iPad (3rd generation), iPod Touch (5th generation), and the iPhone 5. The built-in
YouTube app was removed and a YouTube app was added to the App Store. Google Maps
was also removed and replaced with Apple Maps.
On September 18, 2013, iOS 7 was released with a new look and many features,
including "Control Center", from which you can control basic settings, music,
AirPlay, brightness, the flashlight, and more.
On June 2, 2014, Apple announced iOS 8 at their annual Worldwide Developers
Conference. It has several new features, such as a new app called Health and a
feature called QuickType, which predicts which words you will type. It was
officially released to everyone on September 17, 2014. One of the most important
apps in iOS is the App Store, an electronic market which lets you buy "apps", small
user-interface applications. By January 2013, Apple confirmed it had more than
800,000 applications in the App Store.
"DIGITAL SIGNATURE"
Rohit Shaw
CSE - 4th year
Signatures have long been used to establish identity and authenticate messages. A
valid digital signature gives a recipient reason to believe that the message was
created by a known sender (authentication), that the sender cannot deny having sent
the message (non-repudiation), and that the message was not altered in transit
(integrity). Digital signatures are commonly used for software distribution,
financial transactions, contract management software, and in other cases where it is
important to detect forgery or tampering.
How Does a Digital Signature Work?
A digital signature is an application of cryptography; it is based on three
algorithms.
* A key generation algorithm that selects a private key uniformly at random from a
set of possible private keys. Each private key has a corresponding public key.
* A signing algorithm that, given a message and a private key, produces a signature.
* A signature verifying algorithm that, given the message, public key and signature,
either accepts or rejects the message's claim to authenticity.
I –Brook (Volume 1, Issue 3) July – December 2017 |111
Working process
G (key-generator) generates a public key, pk, and a corresponding private key, sk.
S (signing) returns a tag, t, on the inputs: the private key, sk, and a string, x.
V (verifying) outputs accepted or rejected on the inputs: the public key, pk, a string, x, and a tag, t.
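The (G, S, V) triple can be illustrated with "textbook" RSA applied to a message hash. The tiny hard-coded key pair below (p=61, q=53) stands in for G; this is strictly a sketch of the interface, not a secure implementation, since real systems use 2048-bit-plus keys and padding schemes such as RSASSA-PSS.

```python
# Toy (G, S, V) demonstration. pk = (n, e) is public; sk = (n, d) is secret.
# Here n = 61*53 = 3233, e = 17, and d = 2753 satisfies e*d = 1 (mod 3120).
import hashlib

n, e, d = 3233, 17, 2753

def S(message, sk=d):
    """Signing: hash the message, then apply the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, sk, n)

def V(message, tag, pk=e):
    """Verifying: recompute the hash and check it against tag^e mod n."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(tag, pk, n) == h

t = S(b"pay Alice 100")
print(V(b"pay Alice 100", t))            # True: valid signature accepted
print(V(b"pay Alice 100", (t + 1) % n))  # False: a forged tag is rejected
```

Because only the holder of sk can produce a tag that verifies under pk, this gives authentication and non-repudiation; an altered message changes the hash, giving integrity.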
Reasons for applying digital signature to communication:
AUTHENTICATION
Although messages may often include information about the entity sending a message, that information
may not be accurate. Digital signatures can be used to authenticate the source of messages. When
ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the
message was sent by that user.
INTEGRITY
In many scenarios, the sender and receiver of a message may need confidence that the
message has not been altered during transmission. Although encryption hides the
contents of a message, it may be possible to change an encrypted message without
understanding it; a digital signature over the message reveals any such alteration.
NON-REPUDIATION
By this property, an entity that has signed some information cannot at a later time deny having signed it.
Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature.
ADVANTAGES OF A DIGITAL SIGNATURE OVER AN INK-ON-PAPER SIGNATURE
An ink signature could be replicated from one document to another by copying the image manually or
digitally, but to have credible signature copies that can resist some scrutiny is a significant manual or
technical skill, and to produce ink signature copies that resist professional scrutiny is very difficult.
The word calligraphy comes from two Greek words stuck together: kallos, meaning
"beauty," and graphein, meaning "to write" -- literally "beautiful writing." In the
days before printing was invented, all books and documents were written by hand
using calligraphy, the most famous examples being the bibles written by medieval
monks.
"FINGERPRINT RECOGNITION TECHNOLOGY"
Sagnik Sen
CSE - 4th year
What is fingerprint recognition technology?
Fingerprint identification is one of the most well-known and publicized biometrics.
Because of their uniqueness and consistency over time, fingerprints have been used
for identification for over a century, more recently becoming automated (i.e. a
biometric) due to advancements in computing capabilities. Fingerprint identification
is popular because of the inherent ease of acquisition, the numerous sources (ten
fingers) available for collection, and their established use and collection by law
enforcement and immigration.
Methods involved in fingerprint recognition technology
Histogram equalization
Histogram equalization is a general process used to enhance the contrast of images
by transforming their intensity values. As a side effect, it can amplify noise,
producing worse results than the original image for certain fingerprints. Therefore,
instead of histogram equalization, which affects the whole image, CLAHE
(contrast-limited adaptive histogram equalization) is applied to enhance the
contrast of small tiles and to combine neighboring tiles using bilinear
interpolation, which eliminates the artificially induced boundaries.
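The global version of the technique can be sketched as follows: each intensity is mapped through the normalized cumulative histogram. (CLAHE applies the same idea per-tile with contrast clipping; that refinement is omitted from this minimal sketch.)

```python
# Global histogram equalization on an 8-bit grayscale image stored as a flat
# list of pixel intensities: build the histogram, accumulate it into a CDF,
# and remap each pixel with the standard equalization formula
#   out = round((cdf[p] - cdf_min) / (N - cdf_min) * (L - 1)).

def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:            # cumulative histogram
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    return [round((cdf[p] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
            for p in pixels]

# A low-contrast patch (intensities 50..52) is stretched across the full range:
out = equalize([50, 50, 51, 52])   # [0, 0, 128, 255]
```

This stretching is exactly what can amplify noise in near-uniform fingerprint regions, which is why the tile-based, contrast-limited variant is preferred in practice.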
Binarization
Binarization is the operation that converts a grayscale image into a binary image.
It is performed by computing the mean value of each 32-by-32 input block and setting
each pixel to 1 if its value is larger than the block mean, or to 0 otherwise. We
carried out the binarization process using this adaptive threshold.
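The block-adaptive rule can be sketched directly. The block size is a parameter (32 in the text); a 2x2 block on a tiny synthetic image keeps the example readable.

```python
# Block-adaptive binarization: each block is thresholded at its own mean
# intensity, so dark and bright regions are binarized independently.

def binarize(image, block=2):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)       # this block's adaptive threshold
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = 1 if image[y][x] > mean else 0
    return out

img = [[10, 200, 5, 250],
       [30, 220, 15, 240]]
binary = binarize(img)   # [[0, 1, 0, 1], [0, 1, 0, 1]]
```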
Thinning
During this stage, the characterization of each feature is carried out by
determining the value of each pixel. Some existing techniques are based on thinning
the maximum-valued pixel neighborhood first and filtering in a final step in order
to eliminate false lonely points and breaks. Here, an algorithm is presented which
eliminates the false information by sliding-neighborhood processing in a first step,
followed by thinning without any additional filtering. The fingerprint image is then
separated from the background, and the local minutiae are located on the binary
thinned image.
Applications of fingerprint recognition technology
Biometric security
As connectivity continues to spread across the globe, it is clear that old security
methods are simply not strong enough to protect what's most important. Thankfully,
biometric technology is more accessible than ever before, ready to bring enhanced
security and greater convenience to whatever needs protection.
Mobile biometrics
What is mobile biometric technology? Mobile biometrics solutions live at the
intersection of connectivity and identity. They incorporate one or more biometric
modalities for authentication or identification purposes, and take advantage of
smartphones, tablets, other types of handhelds, wearable technology, and the
Internet of Things.
Time and attendance
What is biometric time and attendance? Biometric time and attendance solutions exist
to keep track of who is where and when they're there. In its most basic form, time
and attendance tracking is a schedule of workers or volunteers.
Advantages of fingerprint recognition technology
It requires low maintenance cost.
It saves the individual's time: the electronic procedure completes in seconds,
preventing the formation of long queues and unnecessary waiting.
Users get the fastest and easiest security using this device.
No software other than the original fingerprint device is required.
This device ensures privacy better than traditional security methods such as PIN
codes or swipe cards.
It is a portable and affordable device that can be placed anywhere.
Disadvantages of fingerprint recognition technology
Distortion
Fingerprints may be distorted and unreadable or unidentifiable if the person's
fingertip has dirt on it, or if the finger is twisted during the process of
fingerprinting. In an ink fingerprint, twisting can cause the ink to blur,
distorting the shape of the fingerprint and potentially making it unreadable. Dirt
on a person's fingertip can mar an ink fingerprint or the image captured by a
digital fingerprint scanner.
Hygiene
Diseases and germs are commonly spread by the hands and fingertips. In order to prevent germs and
viruses transferring from one person's fingers to another, it is important to practice good hand washing
and hygiene techniques. Digital fingerprint scanners typically use a glass surface upon which each
successive individual presses a fingertip or fingertips. In busy places such as airport border control, use of
the same glass surface by many individuals every day may constitute a hygiene hazard.
Damaged prints
Although fingerprints do not naturally change over the course of a person's lifetime, it is possible for
fingerprints to become damaged to the point where they are not useful for identification. Injuries, trauma,
burns or deliberate injury to the fingertips can all cause a person's fingerprints to become different,
unreadable or even eliminated.
Conclusion
A new method has been proposed for the estimation of a high resolution directional field of fingerprints.
This method computes the local ridge orientation in each pixel location, and the associated coherence,
which provides a measure of its reliability. By decoupling the size and shape of the smoothing window
from the block size that defines the resolution of the estimate, the proposed method combines an
improved quality of directional field estimates, better noise suppression, and low computational
complexity. Furthermore, a very efficient algorithm has been proposed to consistently extract all singular
points and their orientations from this high-resolution directional field. The algorithm provides a binary
decision without using thresholds, and is implemented efficiently in small 2-dimensional filters.
1. No Two Fingerprints Are Alike
2. They Develop Early In Life
3. Some Materials Don’t Accept Fingerprints
"GAIT RECOGNITION"
Saket Kumar
CSE - 4th year
Introduction:
People often feel that they can identify a familiar person from afar simply by
recognizing the way the person walks. This common experience, combined with recent
interest in biometrics, has led to the development of gait recognition as a form of
biometric identification.
As a biometric, gait has several attractive properties. Acquisition of images
portraying an individual's gait can be done easily in public areas, with simple
instrumentation, and does not require the cooperation or even awareness of the
individual under observation. In fact, it is precisely the possibility that a
subject may not be aware of the surveillance and identification that raises public
concerns about gait biometrics.
There are also several confounding properties of gait as a biometric. Unlike
fingerprints, we do not know the extent to which an individual's gait is unique, and
there are many factors that cause variations in gait, including footwear, terrain,
fatigue, and injury.
Gait and Gait Recognition:
Gait can be defined as the coordinated, cyclic combination of movements that results
in human locomotion. The movements are coordinated in the sense that they must occur
with a specific temporal pattern for the gait to occur. The movements in a gait
repeat as a walker cycles between steps with alternating feet. It is both the
coordinated and cyclic nature of the motion that makes gait a unique phenomenon.
Human perception of gait:
There are three important properties in the human perception of gaits.
(i) Frequency entrainment: the various components of the gait must share a common
frequency.
(ii) Phase locking: the phase relationships among the components of the gait remain
approximately constant. The lock varies for different types of locomotion, such as
walking versus running.
(iii) Physical plausibility: the motion must be a physically plausible human motion.
Gait simulation and analysis figure
As shown in the figure above there are motions at different frequencies within a gait. However, the gait
has a fundamental frequency that corresponds to the complete cycle. Other frequencies are multiples of
the fundamental. This is frequency entrainment. It is not possible to walk with component motions at
arbitrary frequencies.
The figure shows a stylized body and legs illustrating the sources of different
frequencies in a synthesized gait: (a) the oscillation of a swinging limb repeats
periodically, e.g., left footfall to left footfall; (b) the silhouette of the body
repeats at twice that frequency, i.e., step to step; and (c) the pendulum motion of
the limbs has vertical motion at twice the frequency of the limbs' horizontal
motion.
Potential for gait as a biometric
The use of gait as a biometric for human identification is still young when compared to methods
that use voice, finger prints, or faces. Thus, it is not yet clear how useful gait is for biometrics.
The figure above shows the typical system for testing performance of gait recognition and other
biometric systems.
Two broad approaches to evaluation have emerged. The first is to estimate the rate of correct
recognition, while the second is to compare the variations in a population versus the variations in
measurements. Neither method is entirely satisfactory, but they both provide insights into
performance.
Data in gait recognition
Several types of data are used in gait and motion analysis systems.
(a) Background subtraction
Background subtraction is a method for identifying moving objects against a static
background. Although there are many variations on the theme, the basic idea is to
1. Estimate the pixel properties of the static background.
2. Subtract actual pixel values from the background estimates, and
3. Assume that if the difference exceeds a given threshold, the pixel must be part
of a moving object.
Normally one follows the last step by forming connected components, or blobs, of
moving pixels that correspond to the moving objects. Factors that confound
background subtraction include background motion, moving objects that are similar in
appearance to the background, background variations over long periods of time, and
objects in close proximity merging together. In general, variations on the theme of
background subtraction involve selecting pixel properties to compare, background
models, and innovations to address any number of confounding factors.
Examples of background subtraction (a) original image, and (b) segmented image.
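The three steps above can be sketched for a one-dimensional grayscale sequence: estimate the background per pixel (here the median over a stack of frames, one common choice among the "background models" mentioned above), difference each new frame against it, and threshold. The pixel values and threshold are illustrative.

```python
# Minimal background subtraction: (1) estimate the static background,
# (2) difference a frame against it, (3) threshold to get a motion mask.
import statistics

def estimate_background(frames):
    """Step 1: per-pixel median across a stack of frames."""
    return [statistics.median(pixels) for pixels in zip(*frames)]

def moving_mask(frame, background, threshold=20):
    """Steps 2-3: threshold the absolute difference."""
    return [abs(p - b) > threshold for p, b in zip(frame, background)]

frames = [[100, 100, 100], [102, 98, 101], [99, 101, 100]]  # empty scene
bg = estimate_background(frames)          # [100, 100, 100]
mask = moving_mask([100, 180, 101], bg)   # [False, True, False]
```

Grouping the True pixels of the mask into connected components then yields the moving blobs described in the text.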
(b) Silhouettes
Background subtraction provides a set of pixels within the region of a moving
object. Alternatively, one may only be interested in the outline of that region,
which is known as a silhouette.
Conclusion
Interest in gait-based biometrics has led to a stream of recent results. Clearly,
the performance of gait recognition systems is below what is required for use in
biometrics. When one considers that gait is best suited to recognition or
surveillance scenarios where the databases are likely to be very large, one would
expect high false alarm rates that will render a system useless. Furthermore, tests
to date do not fully consider variation in gait measurement over long time spans and
under different imaging conditions. Nevertheless, researchers are making progress
and understanding more about gait with each new development. Areas for further
investigation include studies on variability with terrain, footwear, long time
spans, and other confounding factors, in an effort to find gait features that vary
only with the individual.
"Hyper-threading: A New Era for Processor Speed-Up" Amitava Halder, Assistant Professor, CSE Dept., SKFGI.
Introduction: Hyper-threading, or HT technology, is a simultaneous multithreading
(SMT) implementation from Intel Corporation. The aim of this technology is to
improve the parallelization of computation tasks on the x86 architecture. SMT is
used because it provides thread-level parallelism: instructions from more than one
thread can be executed in a pipelined fashion. Since pipelining is one of the oldest
and most powerful techniques for concurrent processing, SMT can be implemented
inside the microprocessor with little modification of the hardware. HT technology is
implemented with the concept of the logical processor. Each physical processor core
is logically split into two logical processors, and each can individually execute a
specified thread, be halted, and even be interrupted independently of the other
logical processor sharing the same physical core. The logical processors in a
hyper-threaded core share execution resources such as the execution engine, caches
and the system bus interface unit. Hyper-threading works by duplicating certain
sections of the processor, called the architectural state.
History: [Ref:Wiki]
Denelcor, Inc. introduced multi-threading with the Heterogeneous Element Processor (HEP) in 1982. The
HEP pipeline could not hold multiple instructions that belong to the same process. Only one instruction
from a given process was allowed to be present in the pipeline at any point in time. Should an instruction
from a given process block in the pipe, instructions from the other processes would continue after the
pipeline drained.
A US patent for the technology behind hyper-threading was granted to Kenneth Okin at Sun
Microsystems in November 1994. At the time, CMOS process technology was not advanced enough
to allow a cost-effective implementation.
Intel implemented hyper-threading on an x86 architecture processor in 2002 with the Foster MP-based
Xeon. It was also included on the 3.06 GHz Northwood-based Pentium 4 in the same year, and it
remained a feature of every subsequent Pentium 4 HT, Pentium 4 Extreme Edition and Pentium Extreme
Edition processor. The generations of Intel's processors based on the Core microarchitecture that
preceded Nehalem did not have Hyper-Threading, because the Core microarchitecture is a descendant of
the P6 microarchitecture used in iterations of Pentium from the Pentium Pro through the Pentium III and
the Celeron (Covington, Mendocino, Coppermine and Tualatin-based) and the Pentium II Xeon and
Pentium III Xeon models. Intel released the Nehalem (Core i7) in November 2008, in which
hyper-threading made a return. The first-generation Nehalem contained four cores and effectively scaled
to eight threads; since then, both two- and six-core models have been released, scaling to four and twelve
threads respectively. Earlier Intel Atom cores were in-order processors, sometimes with hyper-threading
ability, aimed at low-power mobile PCs and low-priced desktop PCs. The Itanium 9300 launched with
eight threads per processor (two threads per core) through enhanced hyper-threading technology, and the
next model, the Itanium 9500 (Poulson), features a 12-wide issue architecture with eight CPU cores and
support for eight more virtual cores via hyper-threading. The Intel Xeon 5500 server chips also utilize
two-way hyper-threading.
Architectural Perspective: [Ref: Intel Corporation]
Hyper-Threading Technology does not deliver multiprocessor scaling. Typically, applications make use
of about 35 percent of the internal processor execution resources. The idea behind Hyper-Threading
Technology is to enable better processor usage and to achieve about 50 percent utilization of resources.
A processor with Hyper-Threading Technology may provide a performance gain of 30 percent when
executing multi-threaded operating system and application code over that of a comparable Intel
architecture processor without Hyper-Threading Technology. When placed in a multiprocessor-based
system, the increase in computing power generally scales linearly as the number of physical processors in
a system is increased; although as in any multiprocessor system, the scalability of performance is highly
dependent on the nature of the application.
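The figures quoted above can be combined into a back-of-the-envelope estimate. The numbers below are the article's approximate figures (a ~30 percent gain from hyper-threading and roughly linear scaling with physical processors), not measurements, and the helper function is purely illustrative:

```python
def relative_throughput(physical_processors, ht_gain=0.30):
    """Rough relative throughput of a system, taking one processor
    without Hyper-Threading as the 1.0 baseline. Hyper-threading is
    assumed to add about 30 percent per processor, and adding physical
    processors is assumed to scale linearly (illustrative figures only)."""
    return physical_processors * (1.0 + ht_gain)

# One hyper-threaded processor vs. a plain one, and a four-processor system:
print(relative_throughput(1))
print(relative_throughput(4))
```

Under these assumptions a single hyper-threaded processor delivers about 1.3x a plain one, and four of them about 5.2x, which is why real-world scaling ultimately depends on how well the application threads.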
Each logical processor:
- has its own architectural state
- executes its own code stream concurrently
- can be interrupted and halted independently
The two logical processors share:
- the execution engine and the caches
- the firmware and the system bus interface
As noted above, hyper-threading works by duplicating the sections of the processor that hold the
architectural state. The architectural state is the part of the CPU that holds the state of a processor, and it
includes the following registers [Source: Wiki]:
Control registers:
- instruction flag registers
- interrupt mask registers
- memory management unit registers
- status registers
General purpose registers (GPRs):
- adder registers
- address registers
- counter registers
- index registers
- stack registers
- string registers
[Figure: Architecture of a processor with Hyper-Threading Technology]
Each logical processor thus maintains its own copy of the above-mentioned architectural-state registers,
while the execution resources remain shared between the logical processors.
On a non-hyper-threaded core, only one thread can be executed at a given time, but with hyper-threading
multiple threads can be executed simultaneously on one processor core. A 4-core hyper-threaded
processor therefore looks like 8 cores to the system; in general the apparent count is multiplied by a
factor of two: an 8-core CPU can handle 16 streams, a 4-core can handle 8 streams, and a dual-core can
handle 4 streams. Hyper-threading thus makes a single physical CPU appear to the system as multiple
CPUs. The accompanying figure shows how the architectural state is arranged in single-core,
multi-processor and multi-core environments.
[Figure: A simple comparison of single-core, multi-processor and multi-core architectures]
A few recent processor lines that support hyper-threading:

Model             Core i3   Core i5                 Core i7
Number of cores   2         2/4                     4/6/8/10
Hyper-threading   Yes       Dual-core models only   Yes

Note that a Core i5 handles four streams either with four real cores or with two hyper-threaded cores,
whereas the Core i3 and Core i7 lines are fully hyper-threaded processors.
Logical vs. Physical Processors:
Programmers need to know which logical processors share the same physical processor for the purposes
of load balancing and application licensing strategy.
The following sections explain how to:
- detect a Hyper-Threading Technology-enabled processor
- identify the number of logical processors per physical processor package
- associate logical processors with the individual physical processors
Note that all physical processors present on the platform must support the same number of logical
processors.
The cpuid instruction is used to perform these tasks. It is not necessary to make a separate call to the
cpuid instruction for each task.
Each logical processor has a unique APIC identification (ID). The APIC ID is initially assigned by the
hardware at system reset and can be reprogrammed later by the BIOS or the operating system. The cpuid
instruction also provides the initial APIC ID for a logical processor prior to any changes by the BIOS or
operating system.
An example of the APIC ID numbers under Hyper-Threading Technology:

Physical Processor-0: Logical Processor-0 (APIC ID 00000000), Logical Processor-1 (APIC ID 00000001)
Physical Processor-1: Logical Processor-0 (APIC ID 00000110), Logical Processor-1 (APIC ID 00000111)
The initial APIC ID is composed of the physical processor's ID and the logical processor's ID within the
physical processor.
The least significant bits of the APIC ID are used to identify the logical processor within a given physical
processor. The number of logical processors per physical processor package determines the number of
least significant bits needed.
The most significant bits identify the physical processor ID.
Note that APIC ID numbers are not necessarily consecutive numbers starting from 0.
In addition to non-consecutive initial APIC ID numbers, the operating-system processor ID numbers are
also not guaranteed to be consecutive in value.
Initial APIC ID helps software sort out the relationship between logical processors and physical
processors.
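The bit-split described above can be sketched in a few lines of Python. The one-bit logical-ID width below is an assumption that matches the two-logical-processors-per-package example; on a real system the width would be derived from cpuid, not hard-coded:

```python
# Sketch of splitting an initial APIC ID into its physical and logical
# parts. Assumes two logical processors per physical package, so one
# least-significant bit identifies the logical processor (on a real
# system the bit width would come from cpuid, not a constant).

LOGICAL_ID_BITS = 1  # log2(logical processors per package), assumed

def split_apic_id(apic_id, logical_bits=LOGICAL_ID_BITS):
    logical_id = apic_id & ((1 << logical_bits) - 1)  # least-significant bits
    physical_id = apic_id >> logical_bits             # remaining high bits
    return physical_id, logical_id

# The example APIC IDs from the diagram above:
for apic_id in (0b00000000, 0b00000001, 0b00000110, 0b00000111):
    phys, log = split_apic_id(apic_id)
    print(f"APIC ID {apic_id:08b}: physical ID {phys}, logical ID {log}")
```

Note that the second package comes out with physical ID 3 rather than 1, which illustrates the point above: APIC-derived processor IDs identify packages but are not guaranteed to be consecutive values starting from 0.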
Check Your System for hyper-threading:
To determine whether your Windows system is using hyper-threading, you can do so from the
command line. Windows Management Instrumentation (WMI) is a management infrastructure that
provides access to, and control over, a system, exposing an API to assist in the system's
management. wmic is a command-line interface to WMI.
With the command line open, you can type:
wmic
to enter the interactive wmic interface.
Then, you can type:
CPU Get NumberOfCores,NumberOfLogicalProcessors /Format:List
to view the amount of physical and logical processors.
The output will be something such as:
NumberOfCores=2
NumberOfLogicalProcessors=2
Here the number of physical cores equals the number of logical processors, which shows that
hyper-threading is not being used by the system. If the number of logical processors is greater than the
number of physical cores, then hyper-threading is enabled.
Conclusions:
Hyper-Threading is not the same as a multi-core processor, nor is it equivalent to doubling the number of
cores in a processor. It can enhance CPU performance by at most about 30 percent. Hyper-Threading is
an alternative technique that achieves concurrent processing through simultaneous multithreading, and it
is often a more cost-effective option than increasing the CPU clock frequency or the main memory
capacity. Hyper-Threading is a useful feature and well worth having, particularly for users who edit
media often or use the computer as a workstation for professional programs like Photoshop or Maya.
Departmental Achievements
Toppers shining bright this year!!

First Year
Rank  Name                YGPA
1     NAMAN AGARWAL       9.13
2     SAMPITA NEOGI       8.73
3     NAVIN GUPTA         8.2
3     AKASH SHARMA        8.2

Second Year
Rank  Name                YGPA
1     SRESTHA SADHU       9.12
2     PUJA MISHRA         8.85
3     SUKANYA BOSE        8.75

Third Year
Rank  Name                YGPA
1     INDRAJIT MONDAL     8.89
1     RAKIBULLAH SARKAR   8.89
2     SOUVIK MONDAL       8.64
3     SHALU KUMARI YADAV  8.45

Fourth Year
Rank  Name                DGPA
1     SOUMILI RAKSHIT     9.05
2     SHEELA SINGH        8.94
3     TIRNA ROY           8.89

PART – B
STORIES THROUGH PHOTOS!!
Poster and Slogan competition celebrating
"Vigilance Awareness Week" at the college
campus
Padma Shri awardee Dr. Deepak B. Phatak
interacting with the students during the "College to
Corporate" program organised by IIT Bombay
Dr. Rajib Bag, HOD, CSE, felicitating Trisha Dey
(2012-2016), who coined "I-Brook" as the name of
the departmental magazine
Supreme Knowledge Foundation Group Of Institutions 1, Khan Road, Mankundu, Hooghly – 712139.
Approved by AICTE, Affiliated to MAKAUT (formerly known as WBUT), Affiliated to WBSCT&VE&SD and Recognised by UGC