
IMPLEMENTING VOICE CONTROLLED OPERATION OF PERIPHERALS

(INTERACTIVE VOICE RESPONSE)

Ashwin Raman

Ramnarayan Krishnamurthy

Final Project Report

ECEN 5613 Embedded System Design

May 02, 2015


Table of Contents

1. INTRODUCTION
2. TECHNICAL DESCRIPTION
   a. BOARD DESIGN
   b. NEW HARDWARE COMPONENTS
      (i) ANALOG CIRCUITRY
      (ii) CHOICE OF MICROPROCESSOR
      (iii) GRAPHICAL LCD
      (iv) INTERFACING THE MOTOR
3. SOFTWARE DESIGN
4. FIRMWARE DESIGN
5. TESTING PROCESS
6. RESULTS AND ERROR ANALYSIS
7. CONCLUSION
8. FUTURE DEVELOPMENT IDEAS
9. ACKNOWLEDGEMENT
10. REFERENCES
11. APPENDIX
    APPENDIX 1 - BILL OF MATERIAL
    APPENDIX 2 - SCHEMATICS
    APPENDIX 3 - SOURCE CODES


1. INTRODUCTION

Speech is among the most powerful interfaces available, usable for almost any purpose. The ability to convert a given speech input to text is at the crux of many recent technology developments. Google Voice, Siri and Cortana are speech-to-text implementations that have made it big in the smartphone world. Beyond converting speech to text, additional features such as performing a task can be integrated into the system, which broadens its usefulness. One such derivative is the Interactive Voice Response system, which performs a task corresponding to the user's voice input.

An Interactive Voice Response in its most basic form consists of three stages: Speech gathering, Feature extraction and a visually representable response.


2. TECHNICAL DESCRIPTION

a. BOARD DESIGN

The physical implementation of the interactive voice response system is broadly divided into the 3 aforementioned stages. Each stage is then integrated after ensuring its optimal functionality.

We purchased a general-purpose PCB with the idea of starting from scratch. We drilled suitable holes to mount the RS-232 connector, the power cable and the board holder. The first stage of the system is an analog microphone followed by a noise filter and amplifier circuit. The idea was that a speaker could speak into the microphone and the preprocessing circuit (amplifier and low-pass filter) would filter out the unwanted noise. The analog input would then be sent to a biasing circuit, which systematically shifts a signal by a reference value, in this case to avoid negative amplitudes, as the Analog to Digital Converter accepts only positive values as input.

The second stage is to periodically collect and store the inputs received from the ADC and process them in such a way that we can extract features that facilitate comparison of different words.

Here, we propose the use of the DFT/FFT, which transforms the time-domain signal into a frequency-domain signal. We then isolate the maximum frequency bin of each waveform corresponding to a word and use it as a signature to identify that word. A decision-making system then determines the word from the maximum bin.
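The decision step can be sketched in C. This is a minimal sketch rather than the project's actual code: the dictionary of maximum-bin indices below is purely hypothetical, since the real signatures depend on the recorded samples.

```c
#include <stddef.h>

/* Hypothetical signature table: the maximum-bin index assumed for each word.
   These indices are illustrative only; real values come from training data. */
typedef struct { const char *word; int max_bin; } Signature;

static const Signature dictionary[] = {
    { "YES", 12 },   /* assumed bin index */
    { "NO",  34 },   /* assumed bin index */
};

/* Return the word whose stored maximum-bin index is closest to the bin
   measured from the incoming FFT frame, or NULL if nothing falls within
   the given tolerance. */
const char *identify_word(int measured_max_bin, int tolerance)
{
    const char *best = NULL;
    int best_dist = tolerance + 1;
    for (size_t i = 0; i < sizeof dictionary / sizeof dictionary[0]; i++) {
        int d = dictionary[i].max_bin - measured_max_bin;
        if (d < 0) d = -d;
        if (d < best_dist) { best_dist = d; best = dictionary[i].word; }
    }
    return best;
}
```

A nearest-match lookup with a tolerance is one simple way to absorb small bin shifts between utterances of the same word.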

The Third and final stage is the use of the recognized sample to perform a task. We have interfaced a Graphical LCD, a Motor and a few LEDs of various colors. The LCD doubles up as an Equalizer as well as a simple display system. The operation of the motor is controlled by the user input and a LED corresponding to the input (color) lights up.


The Workflow can be succinctly represented as follows:

b. NEW HARDWARE COMPONENTS

The basic hardware setup included elements from the lab assignments, such as the power circuit, which consists of a diode rectifier bridge, a voltage regulator and a diode that indicates the status of the circuit, as well as setting up the UART using the MAX232 and soldering the RS-232 connector. The new hardware elements installed in stage 1 were the analog circuitry to filter noise, amplify the signal and bias the analog signal. We faced a few issues when trying to interface an analog microphone: it required ad hoc circuitry providing a minimum gain of 75 as well as a high input impedance. In addition, the lab environment was not suitable for testing owing to its high and variable noise levels. Hence, we decided to proceed using an audio jack and pre-recorded sounds instead of the microphone.

Interfacing the audio jack was pretty simple in that it has just 3 pins. The analog circuitry to filter out noise was complex in that we had to choose between passive or active components. We observed that the analog input through the jack was at most 2.13 V in amplitude.

Voice Input / Microphone -> Second Order Active Low Pass Filter -> Biasing Circuit -> 12-bit Analog to Digital Converter -> 256-bin FFT Module -> Maximum Bin and Index Identification -> Pattern Detection -> one of three outputs: Graphic Equalizer (Graphic LCD), DC Motor Operation (MOTOR), or Multi-Colored LED Control (LED)


(i) Analog Circuitry

FIRST ORDER PASSIVE (RC) LOW PASS FILTER

Vin is the analog signal from the audio jack driven by the computer, while the output is sent to the biasing circuit. The essence of the circuit lies in the fact that the voltage drop across the capacitor is much higher than that across the resistor at low frequencies, because the capacitive reactance is given by Xc = 1/(2*pi*f*C). At significantly higher frequencies (3.4 kHz) the drops become comparable, which leads to an attenuation of Vout.

The cut-off frequency calculation stems from the following equation:

fc = 1/(2*pi*R*C)

while the phase shift angle is phi = -arctan(2*pi*f*R*C)

Voice is generally band-limited between 0 and 3.4 kHz, while we can hear frequencies between 20 Hz and 20,000 Hz. Our observation after looking at several music samples was that the majority of the energy lay within 4.5 kHz, and by the Nyquist sampling theorem, sampling at around 9-10 kHz assures that we will not miss any of the content. Hence, working with a cut-off frequency of 10 kHz and fixing C at 1 uF, we can compute R to be approximately 15.9 ohms. The passive first-order low-pass filter (cut off at 10 kHz) attenuated the amplitude by almost 43 percent, though it did restrict the noise to around 210 mV.

The lowered amplitude was a cause of concern as a reduced amplitude corresponds directly to a reduced resolution which would definitely affect our Max Bin Calculations in the FFT. A higher resolution would guarantee a better signature for each word and hence, improve our accuracy and database.

We tried implementing a second-order active low-pass filter (cut-off = 10 kHz) using the OP217 op amp.


Here, we connect the audio signal to the non-inverting input of the operational amplifier. As we are interested in a gain of around 1, we choose the ratio Ra/Rb to be a small fraction. So, as part of our design choice, we allot a value of 22 Kohm to Rb and 22 ohms to Ra. Now, as our cut-off frequency is 10 kHz, we choose R1 and R2 as 15 ohms, while we again select C1 and C2 as 1 microfarad.

The graph represents the added advantage of using a second-order RC filter: the roll-off per decade is doubled, and hence the drop in amplitude beyond the cut-off frequency is much steeper. In addition to the better roll-off, we were able to set a gain of 1 to undo the attenuation of the R-C circuit. We noticed that the noise was significantly lower, at around 180 mV. The difference was not as pronounced as it would have been had we used an analog microphone. The use of the audio jack and a pre-recorded voice was a critical choice, as it significantly reduced interference and aberrations due to noise.


Implementation                               Cut-off Frequency   Noise Amplitude P-P (mV)   Maximum Voltage of input "YES"
First Order Passive Low Pass                 10 kHz              210                        1.34 V
Second Order Passive Low Pass                10 kHz              195                        1.08 V
First Order Active Low Pass (Gain = 1.001)   10 kHz              200                        2.11 V
Second Order Active Low Pass (Gain = 1.001)  10 kHz              180                        2.03 V

BIASING CIRCUIT

The ADC present on the STM32F4 Discovery board requires a positive analog input, but the analog input from the audio jack has an equal representation in the negative amplitude domain. Hence, we need to design a biasing circuit such that the minimum value of the amplitude-shifted signal lies above the x-axis. The idea here is that Vref will be Vin/2, which in this case is 3/2 = 1.5 V. Hence, the represented wave is now centered at 1.5 V and its highest value in our case is 2.13 + 1.5 = 3.63 V.

Though this is the simplest biasing circuit, it adds any noise present on the supply lines directly to the signal. As we have limited noise in our audio signal through the audio jack, this circuit functions just fine; this is where the advantage of the audio jack is most prominent. In the case that we interface an analog microphone, the second circuit would take care of the shortcomings of the first.
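The level shift performed by the biasing circuit reduces to one line; a trivial sketch assuming the 1.5 V reference derived above:

```c
/* The biasing circuit adds Vref (here Vin/2 = 1.5 V) to the whole waveform,
   lifting the negative half-cycles above ground for the ADC. */
double bias_shift(double v_signal, double v_ref)
{
    return v_signal + v_ref;
}
```

The 2.13 V peak therefore lands at 2.13 + 1.5 = 3.63 V, as computed in the text.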


(ii) CHOICE OF MICROPROCESSOR

We decided to work with ARM rather than the 8051 primarily because implementing a DFT on the 8051 would take over 15 minutes for 3 windows, while on the ARM Cortex-M4 it takes barely 2 seconds for 10 windows. Even if that were acceptable, storing and training data and running a complex algorithm like a Hidden Markov Model would make the time cost grow rapidly, making the 8051 an extremely poor choice. In addition, we were looking for an ARM Cortex part that had a built-in ADC as well as DSP support, i.e. a good number of built-in libraries for signal-processing functions. The most commonly used microprocessor that satisfied these requirements was the 32-bit ARM Cortex-M4. We procured the STM32F407VG; the following are its feature sets as well as our proposed uses for them:

1) Operating frequency up to 168 MHz
2) Floating Point Unit (single precision) - better DSP performance
3) Flash memory - up to 1 MB, plus 192 KB of SRAM
4) 12-bit ADCs - used to digitize the analog input signal
5) DACs - proposed use: re-transmit the digitized signal back to the audio jack to be played on the computer
6) 3 I2C
7) 3 SPI - could be used to interface the character LCD
8) UART - to debug any issues with the FFT output
9) SDIO - to interface an SD card to store signature waveforms for a wider database
10) PWM - to control the DC motor

(iii) GRAPHICAL LCD

Having worked on the character LCD as part of Lab 4, we decided to apply our learning by integrating a graphical LCD. In addition, a variety of sinusoids is much easier to represent, and better represented, on a GLCD.

For this purpose, we procured a GDM12864HLCM, a framed 64x128 graphical LCD with LED backlight. It utilizes the commonly used KS0108B parallel interface chipset, which makes writing a driver easier. The GLCD is controlled by 2 such KS0108B parallel drivers, one for each half. The parallel interface comprises 8 data lines, 2 chip selects, 1 register-select line, 1 R/W line and the enable line.

The following are the basic operations implemented on the GLCD:

1) Initialization
2) Page selection
3) Column selection
4) Data display / custom character generation


Initialization is the process of switching the display on and enabling column/page selection. This is achieved by sending a predetermined set of commands to the registers and the DDRAM of the GLCD.

As the GLCD in use has a total of 8 pages, we need to address the correct page while displaying our custom characters. Selection of the page can be achieved by working with the last 3 bits of the Data Register (X1, X2, X3).

The GLCD has 64 rows and 128 columns i.e 64 columns on each half and the pointer is automatically incremented after each write cycle.

Writing data onto the GLCD follows the same principle as the Character LCD case. So, we write values into the data register and a high data bit (DBx=1) corresponds to a Dark pixel and a low data bit corresponds to a Light pixel. Automatic rolling over needs to be implemented.

(iv) INTERFACING THE MOTOR

Our implementation includes a dedicated mode for voice-controlled operation of the motor, with simple commands like START and STOP resulting in corresponding actions. We used a brushless 5 V DC fan specifically because of its soft-start property.

To begin with, we procured a 3 V DC motor but had difficulties integrating it with the driver circuit because of its hard-start property, which was not explicitly specified. The DC motor required 250 mA of current to start and then ran at a constant 80 mA.

On the other hand, the brushless fan required a constant 100 mA of current while operating. The former's startup current spikes triggered the push button we had in place to switch between modes; we were able to isolate this peculiar occurrence by hooking the output to an oscilloscope.

The most important consideration in interfacing the motor with our system was the current requirements. The STM32F4 cannot source 100 mA of current for the optimal operation of the motor and doing so could damage the microcontroller.

Hence, we considered using an H-bridge (motor driver IC) or a simple motor driver circuit to complete the integration. As we were interested in just starting and stopping the motor, we were indifferent to the direction of rotation and did not implement forward (clockwise) and backward (anticlockwise) modes of rotation.

Hence, we did not require an H-bridge, and a simple motor driver circuit sufficed.


In this case, the transistor behaves like a switch: when the input signal is 0 it is in the OFF state and the motor is off; when we supply 5 V at the input, the transistor switches on and so does the motor.


3. SOFTWARE DESIGN

ANALOG TO DIGITAL CONVERTER

Assuming that we are sampling at 22,050 Hz, we take 22,050 points of the audio input per second, so the spacing between samples is 1/22050 s, or 45.35 us. If we sample at double this rate, so as to capture every frequency we can hear (up to 20 kHz), the spacing becomes 22.675 us.

In the ADC, we are trying to find a map between voltages and digitized numbers, and it is in this conversion that the resolution plays a part. A 12-bit resolution implies we are subdividing the voltage range into 2^12 = 4096 steps; essentially, the output y-axis is quantized into 4096 levels. It is now obvious that the higher the resolution, the smaller the difference between two adjacent levels, which translates directly to better accuracy. With a lower resolution, two points close to each other would be allotted the same digital value and we would lose accuracy. So in our case, subdividing 3.3 V into 4096 steps gives about 0.806 mV per step. Another important parameter to consider is the signal-to-noise ratio, given by the formula 6.02n + 1.76 dB. For the given 12-bit ADC, SNR = 74.0 dB, which is virtually noise-free.

Sample calculation on interpretation of the ADC output:

Assume that we see 2 V on the oscilloscope at a certain point on the input wave and wish to map it to an ADC output value. Using simple proportionality, 4096/3 = x/2, so x = 2730, which must be the output value of the ADC.
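The resolution, ideal SNR and voltage-to-code mapping can be checked with small helpers. The full-scale voltage and bit depth are left as parameters, since the text uses both 3 V and 3.3 V in different places:

```c
/* Volts per ADC code: full-scale / 2^bits. */
double adc_lsb_volts(double full_scale_v, int bits)
{
    return full_scale_v / (double)(1 << bits);
}

/* Ideal quantization SNR for an n-bit ADC: 6.02n + 1.76 dB. */
double adc_snr_db(int bits)
{
    return 6.02 * bits + 1.76;
}

/* Map an input voltage to its (truncated) ADC output code. */
int adc_code(double v_in, double full_scale_v, int bits)
{
    return (int)((v_in / full_scale_v) * (double)(1 << bits));
}
```

adc_code(2.0, 3.0, 12) reproduces the worked example's value of 2730.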

FAST FOURIER TRANSFORM ALGORITHM


The Fourier Transform's ability to represent time-domain data in the frequency domain is the primary motivation behind much of the feature extraction and detection of speech signals. One of the most frequent applications is analyzing the spectral (frequency) energy contained in data sampled at evenly spaced time intervals. The image above is a representation of the red (time-domain) signal in the frequency domain (blue). A DFT is equally plausible, but one of the chief concerns is running-time complexity; when working with a very limited number of samples, the difference in performance is not particularly conspicuous. A naive DFT runs with O(n^2) complexity while FFT algorithms run in O(n log n). The FFT computes exactly the DFT of a sampled waveform defined over a finite interval, just far more efficiently.

An important attribute of the FFT, like the ADC, is its resolution, which in the case of the FFT we express in bins.

Bins define how finely we can differentiate two closely placed frequencies and represent them with distinct amplitudes. If we consider a 256-bin FFT, we are taking 256 samples in our FFT calculations. Given that we are sampling at 44,100 Hz, we get a resolution of 44100/256, which is around 172 Hz; any two frequencies closer than this cannot be differentiated. Also, the output array in which we display the amplitudes corresponding to certain frequencies has a length of 256, but only the first 128 elements suffice, as the next 128 mirror the first 128 (the spectrum of a real signal is symmetric).
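The bin arithmetic is easy to get wrong, so here is a small sketch of the two quantities used here, with the sample rate and FFT size left as parameters:

```c
/* Frequency span covered by one bin of an n-point FFT at sample rate fs. */
double fft_bin_resolution_hz(double fs_hz, int n_points)
{
    return fs_hz / (double)n_points;
}

/* Nearest bin index for a given frequency. */
int freq_to_bin(double f_hz, double fs_hz, int n_points)
{
    return (int)(f_hz * (double)n_points / fs_hz + 0.5);
}
```

At 44,100 Hz with 256 points, one bin spans about 172.27 Hz, so a 6 kHz tone lands near bin 35.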


The maximal frequency we can analyze is always one bin less than half the sampling frequency. The implementation of the FFT utilizes the Cooley-Tukey algorithm, which re-expresses the DFT of an arbitrary composite size N = N1*N2 in terms of smaller DFTs of sizes N1 and N2, recursively. Essentially, the algorithm applies the divide-and-conquer ideology by subdividing a single problem into multiple solvable subproblems.

The pseudocode describing the Cooley-Tukey algorithm is:

X0,...,N−1 ← ditfft2(x, N, s):
    if N = 1 then
        X0 ← x0
    else
        X0,...,N/2−1 ← ditfft2(x, N/2, 2s)
        XN/2,...,N−1 ← ditfft2(x+s, N/2, 2s)
        for k = 0 to N/2−1
            t ← Xk
            Xk ← t + exp(−2πi k/N) Xk+N/2
            Xk+N/2 ← t − exp(−2πi k/N) Xk+N/2
        endfor
    endif

In principle, we are looking for a signature for each waveform so that we can differentiate one from another. Such signatures need to be highly localized to be applicable to a much larger data set. One such signature is the index of the maximum-amplitude bin. Hence, we store


both the index of the bin as well as the value of the amplitude corresponding to that bin, which is the highest. The crucial problem here is the number of samples required to avoid wasting bandwidth. We ascertained that the number of required samples never exceeded 15 in most cases when we ran a list of frequently used words through our FFT algorithm.
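Extracting that signature is a single pass over the magnitude array; a minimal sketch:

```c
/* Find the signature described above: the index and value of the largest
   amplitude bin among the first n_bins entries of the magnitude array. */
void max_bin(const float *mag, int n_bins, int *index, float *value)
{
    *index = 0;
    *value = mag[0];
    for (int i = 1; i < n_bins; i++) {
        if (mag[i] > *value) {
            *value = mag[i];
            *index = i;
        }
    }
}
```

Only the first half of the FFT output (128 bins of 256) needs to be scanned, per the symmetry noted earlier.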

The next step was in identifying a pattern between both the bin indices and the maximum amplitudes and taking advantage of the fact that the pattern must be unique for different word groups.

Here are sample expected FFT output waveforms for certain generic inputs.


4. FIRMWARE DESIGN

Firmware was written for:
1) USART (universal synchronous/asynchronous receiver-transmitter)
2) Graphical LCD
3) Analog to Digital Converter
4) Interfacing PB0 as an external interrupt input pin

We discuss the firmware design of the graphical LCD and the Analog to Digital Converter (ADC) in depth below.

Graphical LCD

PE0-PE7 are connected to DB0-DB7 on the graphical LCD; data is transferred to the DDRAM over this parallel bus. PA0-PA5 are used to control the other pins on our LCD:

1) PA0 for CS2 (Chip Select 2)
2) PA1 for CS1 (Chip Select 1)
3) PA2 for /RST (Reset)
4) PA3 for R/W (Read/Write)
5) PA4 for D/I (Register Select)
6) PA5 for E (Enable)

The graphical LCD firmware has four primary functions :

a) void GLCD_Data(unsigned char); configuration: RS = 1 and R/W = 0. This function is called to fill the pixels corresponding to the 1s in the binary representation of the data being written to a page. E.g., if the data being written is 8 (binary 00001000), then the 4th pixel of the selected page and column is darkened.

b) void GLCD_Comd(unsigned char); configuration: RS = 0 and R/W = 0. This function is called to issue a command to the graphic LCD (i.e. to select a given page/column/chip, etc.).

c) void GLCD_Init(); This function is called to initialize the LCD. The following four commands are sent to the graphical LCD:

I. 0xC0 // Set the display start line to 0
II. 0xB8 // Starting DDRAM address for page selection
III. 0x40 // Starting DDRAM address for column selection
IV. 0x3F // Display ON


The graphical LCD has 2 chips. Each chip has DDRAM addresses for 64 columns on the LCD. Chip1 is selected to write to the first 64 columns (0-63) and Chip2 is selected to write to the last 64 columns (64-127).

- There are 8 pages on graphical LCD. Each page has 8 rows (i.e Page 0 constitutes row0-7, Page1 constitutes row 8-15 and so on). The starting address of the pages is 0xB8. Hence, if data has to be written to a certain page (0<=page<8) then the command for the same would be GLCD_Comd( 0xB8|page)

- There are 128 columns on the graphical LCD as described above. The starting address of the columns on the DDRAM is 0x40. For writing to a certain column on the graphical LCD, the command for the same would be GLCD_Comd(0x40 | column)
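The page and column command bytes compose by OR-ing the index into the base opcode, exactly as in the GLCD_Comd(0xB8|page) and GLCD_Comd(0x40|column) calls above. A small sketch of just that byte arithmetic, with the mask widths assumed from the 8-page, 64-columns-per-chip layout described here:

```c
/* Compose the KS0108 'set page' command: base 0xB8, page 0..7. */
unsigned char glcd_page_cmd(unsigned char page)
{
    return (unsigned char)(0xB8 | (page & 0x07));
}

/* Compose the KS0108 'set column' command: base 0x40, column 0..63 per chip. */
unsigned char glcd_column_cmd(unsigned char column)
{
    return (unsigned char)(0x40 | (column & 0x3F));
}
```

For example, addressing page 3 sends the byte 0xBB and addressing column 5 sends 0x45.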

d) void DelayMs(int); This function provides sufficient delay between initialization statements depending on the argument given.

e) void GLCD_drawline(unsigned int column, unsigned char A, unsigned char k) This function is called to draw a single line on the graphical LCD. The length of the line corresponds to the amplitude of a certain frequency bin as obtained from the FFT algorithm.

The FFT algorithm will give us 128 frequency bin values with each bin having a certain amplitude measurement. Each bin value thus occupies one column on our graphical LCD (128 bin values from FFT algorithm and 128 columns on graphical LCD).

The amplitude obtained for each frequency bin is within the range 0-70. Each amplitude value was amplified by a factor of 50 (this amplification factor can be modified to cover the height of the graphical LCD appropriately; we chose 50). If the frequency bin from which we are reading is 'n' (where 0 <= n < 128), then the line corresponding to its amplitude is drawn in column 'n'. Each pixel on a single line corresponds to a different row. We start printing from row 0 of Page 0 on column 'n'; after every 8 rows, we switch over to the next page.

- Let us assume the amplitude value of a certain frequency bin is x.
- After amplification, X = 50x.
- We then express X in the form X = 256*k + A.
- Since each page has 8 rows, printing 1s on all 8 pixels of a certain column corresponds to a value of 256 (since 2^8 = 256). Thus the number of pages (within a certain column) we fill entirely is X/256 (i.e. k).
- The remainder of X/256 (i.e. A) corresponds to the number of pixels darkened in the last page: if 0 < A < 32 then 1 pixel is darkened; if 32 < A < 64 then 2 pixels, and so on.
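The X = 256*k + A decomposition can be sketched as a helper. The last-page rule follows the 32-counts-per-pixel description above; the exact boundary handling (A an exact multiple of 32) is an assumption:

```c
/* Split an amplified amplitude X = amplitude * gain into the number of
   fully darkened pages (X / 256) and the number of pixels darkened in
   the last, partial page (one pixel per 32 counts of the remainder). */
void bar_decompose(int amplitude, int gain, int *full_pages, int *last_pixels)
{
    int X = amplitude * gain;
    int A = X % 256;                    /* remainder for the last page */
    *full_pages  = X / 256;             /* pages filled entirely       */
    *last_pixels = A ? A / 32 + 1 : 0;  /* 0<A<32 -> 1, 32<A<64 -> 2   */
}
```

For an amplitude of 10 with gain 50, X = 500 gives one full page plus a partial page of remainder 244.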


f) void GLCD_character_write(unsigned char disp); We generated a custom character for every letter of the alphabet. A combination of these characters is used to print the word corresponding to the audio input.

The custom character codes were written in file font.h. We could not make use of the predefined characters in font.h as we were using a different alignment of our graphical LCD (The predefined characters in font.h were aligned with page 7 as reference 0. We used page 0 as our reference 0. On using the predefined characters, they were being printed upside down and words were being printed left to right)

ADC (Analog to Digital Converter)

We are using the ADC 1 on our STM32F4 discovery board.

1) Our system clock runs at 8 MHz. According to the datasheet, the ADC clock should be in the range 600 kHz to 14 MHz. The clock for the ADC is provided by a prescaler fed from PCLK2, the APB2 clock. - RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);

2) We have used PC1 as an Analog input pin for our ADC. For this, GPIO Pin PC1 has been configured appropriately in the function

- void GPIO_Configuration_adc(void)

3) After GPIO Initialization, we have configured our ADC1 by calling the function - void ADC_Configuration(void).

4) ADC1 is configured as a 12-bit ADC running in continuous conversion mode (i.e. ADC1 will not wait for a trigger to start or stop conversion). We then used a prescaler to provide a clock to the ADC; our prescaler value is 4, so the ADC clock is 8 MHz / 4 = 2 MHz.

- ADC_CommonInitStructure.ADC_Prescaler = ADC_Prescaler_Div4;

5) To “bind” ADC with PC1 the following function is called - ADC_RegularChannelConfig(ADC1, ADC_Channel_11, 1, ADC_SampleTime_15Cycles);

Our ADC sample time is 15 cycles, i.e. the ADC samples the analog signal for 15 ADC clock cycles. After sampling, the 12-bit conversion itself takes a further 12 ADC clock cycles, so the total conversion time is 15 + 12 = 27 ADC clock cycles.

6) Finally, to start ADC conversion we call the following function- ADC_SoftwareStartConv(ADC1); //Start the conversion
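The cycle counting above reduces to a single expression. A generic helper, with the sampling cycles, conversion cycles and ADC clock all left as parameters (the values plugged in below are illustrative, not measured):

```c
/* Total conversion time in microseconds:
   (sampling cycles + conversion cycles) / ADC clock. */
double adc_conversion_time_us(double sample_cycles, double conv_cycles,
                              double adc_clock_hz)
{
    return (sample_cycles + conv_cycles) * 1e6 / adc_clock_hz;
}
```

Keeping the clock as a parameter makes it easy to re-check the timing whenever the prescaler changes.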


5. TESTING PROCESS

Our first step was to test each part individually.

Having powered on the Discovery board, we checked whether we were getting the 3 V and 5 V outputs. We also programmed a sample LED example from the website to confirm that the board had no issues.

The next step was to confirm that the UART we designed was functioning without issues. For this, we again wrote a simple program similar to the one from LAB2 to check if a single character is being transmitted successfully.

We played a certain sound through the Google Dictionary to have a constant voice removing elements of accent, differing frequencies, pitch and timbre.

Then we tried transmitting the output of the ADC onto the screen to check if we were getting the expected values. The check was a simple translation formula described above. Having mapped the voltages to digital value, we ran the FFT algorithm to check if we were getting amplitude entries in bin values.

In parallel, we ran a version of the FFT algorithm in MATLAB to generate time-domain as well as frequency-domain maps of certain words. The next obvious step was to compare the maximum amplitude and bin indices with the values we got in MATLAB. Once we ascertained that our FFT algorithm was functioning optimally without losing or dropping frames, we worked on solving the pattern to find unique signatures.

Finding a pattern had many dependencies: the bin values varied with the laptop on which the sound was played, and they also varied with changes to the firmware we were developing in parallel, due to the additional delays.


Hence, our next approach was to interface components like the motor, graphical LCD and the LEDs and confirm their functionality individually before integrating them with the whole system.

A simple GPIO toggle was enough to check the functionality of the several colored LEDs, while we designed a Motor driver circuit to avoid excessive current in the circuit and checked its functionality on a breadboard.

The graphical LCD was a much tougher proposition, and we faced several issues while initializing it. Addressing pages and writing strings was also much harder, because it worked in a completely different way from the character LCD we had interfaced as part of Lab 4. But once we were able to define each string as a collection of custom characters and print elements on the screen, we found it equally simple to display the FFT waveform as well.

The next step was the integration of the various components, i.e. the complete chain in which a certain sound, say "START", is played, sent through the noise filter circuit and the bias circuit onto the 12-bit ADC, after which the FFT algorithm works its magic. Once the FFT algorithm returns the bin indices, we run the pattern detection algorithm that determines the


identity of the word “START”. Once the detection is processed, a certain task such as “START” of the motor or the blinking of an LED occurs.

We initially implemented this system with just 2 words, YES and NO, but later added 10 more words to the dictionary to allow for voice operation of a variety of components.

The major problems arose in integrating between different modes of operation as multiple words had similar FFT profiles.


6. RESULTS AND ERROR ANALYSIS

The FFT implementation provided a very robust and accurate system for isolating maximum amplitude bins and in differentiating words.

Quantitatively, the performance was a perfect 100 percent on the basic 2 words, YES and NO, primarily because of their mutually exclusive FFT waveforms.

The effectiveness of this method, measured on a dictionary of the following 9 words, is depicted in the table below.

WORD         NUMBER OF SUCCESSFUL IDENTIFICATIONS (out of 100)
START        96
STOP         94
RED          84
BLUE         98
ORANGE       91
GREEN        93
CIRCULATE    92
LOGIN        95
REGISTER     94

TOTAL ACCURACY: 93%

The accuracy obtained here is subject to the following constraints:

1) Same voice (same person speaking at a constant pitch without any added accent)
2) Constant speed of enunciation
3) Fixed vocabulary (database of words)
4) Constant background (noise and disturbance)

We can be sure that 93 of 100 times we will identify the correct word. In addition, a word is never identified unless it is part of the dictionary; a new word is never recognized, and to recognize it we first need to train the system on a test data set.

Also, a change in the input volume has barely any effect on the probability of a successful identification, so the algorithm provides a volume-independent implementation of speech recognition.
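One simple way to see why volume has little effect: scaling the input only scales every FFT bin by the same factor, so normalizing the spectrum by its peak leaves the bin pattern used for matching unchanged. The sketch below illustrates the idea and is not our exact code:

```c
/* Scale a magnitude spectrum so its peak is 1.0. Input volume only
 * multiplies every bin by the same factor, so the normalized pattern,
 * and in particular the dominant-bin indices, is volume-independent. */
void normalize_spectrum(float *mag, int n)
{
    float peak = 0.0f;
    for (int i = 0; i < n; i++)
        if (mag[i] > peak)
            peak = mag[i];
    if (peak > 0.0f)
        for (int i = 0; i < n; i++)
            mag[i] /= peak;
}
```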

Here are a few images that show the state of the system in different modes:

1) MODE 1 – GRAPHICAL EQUALIZER, which represents the frequency-transformed waveform of a song being played.


The outputs corresponding to different input types: the sine wave, sawtooth, and square wave.


The FFT corresponding to a sine wave with a frequency of 6000 Hz.


The FFT profile corresponding to a triangular wave with a frequency of 6 kHz, showing the odd harmonics only.

The FFT profile of a sawtooth wave centered at 3000 Hz; both the odd and even components are visible.


2) MODE II – Color Recognition and activation of the corresponding LED


3) MODE III – Motor Control: This mode is dedicated to operating the motor and controlling its speed.

The START voice command starts the DC motor, as visible in the image above.


The STOP voice command stops the motor, as visible in the image.

In essence, the semi-supervised algorithm is functionally very stable within its limits of operation.

Given the minimal leeway we allowed for noise in the system, noise is not a major contributor to the error analysis. Instead, the optimization function used to recognize each word's pattern was one of the major sources of incorrect identifications.

When words have similar or adjacent maximum bins, there is a high probability that one word is confused for the other. Eliminating this error would require a more sophisticated feature extraction algorithm than a simple FFT.

For example, the words "Orange" and "Yes" have similar maximal bins and indices owing to how their key elements are stressed. In such cases, the optimization function returns overlapping ranges of values, so distinct identification with 100 percent accuracy becomes harder.


In addition, differentiating between words like "Nope" and "No", or "Yes" and "Yeah", requires a better feature extraction mechanism than the simple use of FFTs. Using a more descriptive feature extraction algorithm may in turn require dimensionality reduction techniques.

When a different user speaks a recognized word (one that exists in the database), the probability of successful identification is much lower, as the FFT is frame-specific and does not account for temporal similarities and interdependencies between frames. Adding overlapping frames, known as the Short-Time Fourier Transform (STFT), is a good first step, but it still discounts the pitch, tone, timbre, and accent of the speaker.
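A minimal sketch of the STFT framing idea mentioned above (the function names and the 50% overlap are illustrative assumptions): with frame length N and hop N/2, consecutive frames share half their samples, which captures some dependency between adjacent frames:

```c
/* Number of complete frames an STFT produces over a signal of
 * num_samples samples, given the frame length and hop size.
 * A hop of frame_len/2 gives 50% overlap between frames. */
int stft_num_frames(int num_samples, int frame_len, int hop)
{
    if (num_samples < frame_len || hop <= 0)
        return 0;
    return 1 + (num_samples - frame_len) / hop;
}

/* Pointer to the start of frame i; frames overlap when hop < frame_len. */
const short *stft_frame(const short *samples, int i, int hop)
{
    return samples + (long)i * hop;
}
```

Each extracted frame would then be passed through the FFT individually, and a word signature built from the sequence of per-frame spectra rather than a single global spectrum.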

7. CONCLUSION

The learning curve involved in understanding a complex architecture like ARM was steep yet rewarding. The advantages of using the STM32F4 were pronounced, especially when we had to implement the FFT and other optimization algorithms to recognize patterns in the signal.

Implementing a speech-to-text converter using the FFT as the feature extraction algorithm, with an audio cable as the input, reduced the complexity of the ongoing challenge of speech recognition, yet we were still able to demonstrate very good accuracy on a moderately large vocabulary of words.

Additionally, we interfaced a few commonly used components like a Graphical LCD, a Motor and a few LEDs to exhibit the voice controlled operation of peripherals as well as add a touch of visual presence.


8. FUTURE DEVELOPMENT IDEAS

The implementation in itself was complete, yet there is room for many changes on various fronts.

Most speech recognition algorithms utilize FFTs for feature extraction, but the FFT is a means to an end and not sufficient to differentiate between every word in the English dictionary. In addition, it does not consider dependencies between consecutive frames, which reduces its efficiency drastically. The choice of kernel (in our case, a square wave) also has a significant bearing on the quality of detection.

The following are a few ideas we would like to test in the near future:

1) Implement a more robust speech recognition algorithm, most likely a Hidden Markov Model, together with a dedicated feature extraction scheme such as Mel-Frequency Cepstral Coefficients (MFCCs). This should outperform the FFT and provide a more robust system for speech detection, as it takes into account temporal correlations and timbre and pitch changes in the voice.

2) Increase the vocabulary to span all English words.
3) We were working on integrating our project with that of a peer group who had implemented a face detection scheme. The idea was to create an autonomous security system that provides both voice and face detection as a means of secure login. Hence, we added 2 more words to our database, LOGIN and REGISTER, that would perform the corresponding tasks in the face detection scheme developed by the other team.
4) Add an envelope detection algorithm to mimic the graphic equalizer functionality.
5) Interface a MicroSD card so that the waveforms for a variety of words can be stored on the card and suitable parameters compared in real time to identify the word.
6) Play the sound back through the DAC after undoing the effect of biasing.
7) Interface an analog microphone rather than the audio cable. Though this may require a better band-pass filter to isolate the noise and some additional software, it would make the system truly autonomous.
8) A speech-to-text converter can be really handy in classes as it "automates" the process of taking notes. In addition, it is an extremely useful aid to people with disabilities, and it has considerable commercial impact, e.g., making smartphones "hands free".


9. ACKNOWLEDGEMENT

We would like to sincerely thank Professor Linden McClure for his initial motivation and encouragement when we proposed this project.

We also thank our TAs, Aniket Kumar and Umang Patel, whose guidance and support throughout the project helped us seamlessly shift from the 8051 to ARM.

Last but not least, a massive shout out to our peers who made this learning experience memorable and rewarding.

10. REFERENCES

Datasheets: STM32F407VG (ARM Cortex-M4)
1) http://www.st.com/web/en/resource/technical/document/datasheet/DM00037051.pdf
2) Reference manual: http://www.st.com/web/en/resource/technical/document/reference_manual/DM00031020.pdf

Introduction to the STM32F407 and basic coding with the Eclipse IDE
1) https://www.youtube.com/watch?v=ATTahQlbBME
2) https://www.youtube.com/watch?v=mDxn-wdixoM

Speech Recognition Techniques
1) http://www.cs.dartmouth.edu/~dwagn/aiproj/speech.html
2) http://en.wikipedia.org/wiki/Speech_recognition
3) http://dsp.stackexchange.com/questions/8508/finding-the-dominant-tone-in-a-signal
4) http://stackoverflow.com/questions/13593040/matlab-finding-dominant-frequencies-in-a-frame-of-audio-data
5) http://www.academia.edu/6799602/Dominant_frequency_extraction

Analog circuitry for input from the audio probe
1) DC biasing: http://electronics.stackexchange.com/questions/14404/dc-biasing-audio-signal
2) DC biasing circuit: https://www.google.com/search?q=DC+bias+circuit&espv=2&biw=1366&bih=667&source=lnms&tbm=isch&sa=X&ei=byhDVdnKFcWXNvaVgPgB&ved=0CAYQ_AUoAQ#imgrc=rWeJguDyOO_9-M%253A%3BKC1oc1irkmGLlM%3Bhttp%253A%252F%252Fi.stack.imgur.com%252FSdcyk.png%3Bhttp%253A%252F%252Felectronics.stackexchange.com%252Fquestions%252F53325%252Fremoving-dc-wander-in-the-offset-of-a-dc-biased-audio-signal%3B954%3B529
3) Op-amp (OP271) datasheet: http://www.analog.com/media/en/technical-documentation/obsolete-data-sheets/OP271.pdf
4) 2nd-order low-pass filter: http://www.electronics-tutorials.ws/filter/second-order-filters.html
5) Low-pass filter circuit calculations: http://sim.okawa-denshi.jp/en/CRCRkeisan.htm

Graphic LCD
Datasheet:
1) KS0108 controller: https://www.sparkfun.com/datasheets/LCD/GDM12864H.pdf
Libraries for 8051 and Arduino:
1) https://www.pantechsolutions.net/microcontroller-boards/glcd-interfacing-with-8051-primer
2) http://www.ramtex.dk/glcd/glcd0108.htm
3) http://playground.arduino.cc/Code/GLCDks0108
4) http://www.microchip.com/forums/m298788.aspx

USART
1) http://stm32f4-discovery.com/2014/04/library-04-connect-stm32f429-discovery-to-computer-with-usart/
2) http://eliaselectronics.com/stm32f4-usart-example-with-interrupt/

External interrupt on the STM32F4
1) http://stm32f4-discovery.com/2014/08/stm32f4-external-interrupts-tutorial/
2) http://www.keil.com/download/docs/352.asp
3) https://www.youtube.com/watch?v=aGbNXBPdcl0
4) my.st.com public forum

Using the ADC on the STM32F4
1) Application note (datasheet): http://www.st.com/web/en/resource/technical/document/application_note/DM00050879.pdf
2) http://stm32f4-discovery.com/2014/04/library-06-ad-converter-on-stm32f4xx/
3) my.st.com public forum for the ADC
4) http://www.micromouseonline.com/2009/05/26/simple-adc-use-on-the-stm32/

256-point FFT code implementation and RCC clock configuration
1) http://stm32f4-discovery.com/2014/10/stm32f4-fft-example/


11. APPENDIX

APPENDIX 1 - BILL OF MATERIAL

Part Description                                   Source        Cost ($)
Perforated Board                                   EE Store      8.00
STM32F4 Discovery Board                            DigiKey       15.00
3.5 mm Stereo Cable, 6 feet                        JB Saunders   3.00
3.5 mm Audio Jack x 2                              JB Saunders   2.00
Resistors, Capacitors and Power Circuitry Parts    ESD Lab       12.00
Graphical LCD                                      Borrowed      -
DC 3V Motor                                        McGuckins     4.00
OP271 (Op Amp)                                     EE Store      2.50
UART Circuitry                                     ESD Lab       3.00
TOTAL                                                            49.50


APPENDIX 2 – SCHEMATICS:


APPENDIX 3 - SOURCE CODES

1) SOURCE CODE FOR GRAPHIC LCD
FILENAME: lcd.h

/* lcd.h
   NAME: ASHWIN RAMAN, RAMNARAYAN KRISHNAMURTHY
   DESCRIPTION: This .h file is the driver code for the graphic LCD
   FUNCTIONALITIES:
   1) GPIO initialization for pins on the Cortex-M4 to control the data bus
      pins and control pins on the GLCD
   2) Functions for command write and data write
   3) Function to draw a line corresponding to the amplitude of a particular
      frequency bin */

#include "adc.h"
#include "font.h"

// Source: Looked up many driver functions for graphic LCD interfacing with
// the 8051 and adapted the ideas for ARM
void GLCD_Init();
void GLCD_Data(unsigned char);
void GLCD_Comd(unsigned char);
void DelayMs(int);

unsigned char mode = 1;
unsigned char c = 0;

#define CS2_ON  GPIO_ResetBits(GPIOA, GPIO_Pin_0)
#define CS2_OFF GPIO_SetBits(GPIOA, GPIO_Pin_0)
#define CS1_ON  GPIO_ResetBits(GPIOA, GPIO_Pin_1)
#define CS1_OFF GPIO_SetBits(GPIOA, GPIO_Pin_1)
#define RST0    GPIO_ResetBits(GPIOA, GPIO_Pin_2)
#define RST1    GPIO_SetBits(GPIOA, GPIO_Pin_2)
#define RW0     GPIO_ResetBits(GPIOA, GPIO_Pin_3)
#define RW1     GPIO_SetBits(GPIOA, GPIO_Pin_3)
#define RS0     GPIO_ResetBits(GPIOA, GPIO_Pin_4)
#define RS1     GPIO_SetBits(GPIOA, GPIO_Pin_4)
#define EN0     GPIO_ResetBits(GPIOA, GPIO_Pin_5)
#define EN1     GPIO_SetBits(GPIOA, GPIO_Pin_5)


void LCD_GPIO_init()  // GLCD pin initialization
{
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOE, ENABLE);
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOA, ENABLE);

    // GPIO initialization for the DB0-DB7 pins on the GLCD (PE0-PE7 used)
    GPIO_InitTypeDef lcd_gpio;
    lcd_gpio.GPIO_Pin = GPIO_Pin_0 | GPIO_Pin_1 | GPIO_Pin_2 | GPIO_Pin_3 |
                        GPIO_Pin_4 | GPIO_Pin_5 | GPIO_Pin_6 | GPIO_Pin_7;
    lcd_gpio.GPIO_Mode  = 0x01;  // 01 = Output
    lcd_gpio.GPIO_Speed = 0x03;  // 03 = 100 MHz
    lcd_gpio.GPIO_OType = 0x00;  // 00 = Push-pull
    lcd_gpio.GPIO_PuPd  = 0x00;  // 00 = No pull
    GPIO_Init(GPIOE, &lcd_gpio);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_0);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_1);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_2);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_3);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_4);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_5);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_6);
    GPIO_PinLockConfig(GPIOE, GPIO_Pin_7);

    // GPIO initialization for the control pins on the GLCD (PA0-PA5 used)
    lcd_gpio.GPIO_Pin = GPIO_Pin_0 | GPIO_Pin_1 | GPIO_Pin_2 | GPIO_Pin_3 |
                        GPIO_Pin_4 | GPIO_Pin_5;
    GPIO_Init(GPIOA, &lcd_gpio);
    GPIO_PinLockConfig(GPIOA, GPIO_Pin_0);
    GPIO_PinLockConfig(GPIOA, GPIO_Pin_1);
    GPIO_PinLockConfig(GPIOA, GPIO_Pin_2);
    GPIO_PinLockConfig(GPIOA, GPIO_Pin_3);
    GPIO_PinLockConfig(GPIOA, GPIO_Pin_4);
    GPIO_PinLockConfig(GPIOA, GPIO_Pin_5);
}

int i;

void LCD_Delay(unsigned int x)
{
    unsigned int i;
    for (i = 0; i < x; i++)
        asm("nop");  // busy-wait: one NOP per loop iteration
}

void LCD_EN()
{
    EN0;
    LCD_Delay(100);
    EN1;
    LCD_Delay(100);
}


void GLCD_Comd(unsigned char cmnd)  // Writing a command to the GLCD
{
    GPIO_Write(GPIOE, cmnd);
    RS0;          // RS low: command register
    RW0;          // RW low: write
    EN1;          // enable high
    DelayMs(10);
    EN0;          // enable low
}

void GLCD_Data(unsigned char data)  // Writing data to the GLCD
{
    GPIO_Write(GPIOE, data);
    RS1;          // RS high: data register
    RW0;          // RW low: write
    EN1;          // enable high
    DelayMs(10);
    EN0;          // enable low
}

void DelayMs(int k)  // crude software delay
{
    unsigned int a;
    for (a = 0; a <= k; a++);
}

void GLCD_Init()  // Graphic LCD initialization function
{
    unsigned char Comd[5] = {0xc0, 0xb8, 0x40, 0x3f};  // LCD command list
    chip_select(1);                // send commands to chip 1
    for (i = 0; i < 4; i++)
        GLCD_Comd(Comd[i]);
    chip_select(0);                // send commands to chip 0
    for (i = 0; i < 4; i++)
        GLCD_Comd(Comd[i]);
}

void chip_select(unsigned char Page)  // Select chip: columns 0-63 on CS1, 64-127 on CS2
{
    if (Page)
    {
        CS1_OFF;
        CS2_ON;
    }
    else
    {
        CS1_ON;
        CS2_OFF;
    }
}

void GLCD_clear()  // Clear the GLCD screen
{
    int Page = 0, i = 0;
    int Column = 0;

    for (Page = 0; Page < 8; Page++)
    {
        chip_select(0);            // left half of the display
        GLCD_Comd(0xb8 | Page);
        GLCD_Comd(0x40);
        for (Column = 0; Column < 128; Column++)
        {
            if (Column == 64)
            {
                chip_select(1);    // right half of the display
                GLCD_Comd(0xb8 | Page);
                GLCD_Comd(0x40);
            }
            GLCD_Data(0x00);
        }
    }
}

void GLCD_drawline(unsigned int column, unsigned char A, unsigned char k)  // Change here for methods 1, 2 and 3
{
    unsigned char page = 0;  // number of pages to fill completely (0xFF) in this column

    if (column == 63)
        chip_select(0);
    else if (column == 127)
        chip_select(1);

    if (k > 0)
    {
        for (page = 0; page < k; page++)  // amplitude decomposed as x = k*256 + A
        {
            GLCD_Comd(0xb8 | page);
            GLCD_Comd(0x40 | column);
            GLCD_Data(0xFF);  // full bar in the selected page and column
        }
    }

    GLCD_Comd(0xb8 | page);
    GLCD_Comd(0x40 | column);
    GLCD_Data(A);  // partial bar for the remainder

    page = page + 1;
    if (mode == 1)
    {
        while (page < 8)  // blank the pages above the bar
        {
            GLCD_Comd(0xb8 | page);
            GLCD_Comd(0x40 | column);
            GLCD_Data(0x00);
            page++;
        }
    }
}

void GLCD_character_write(unsigned char disp)
{
    unsigned char i = 0;
    chip_select(0);
    GLCD_Comd(0xb8 | 7);  // select page
    GLCD_Comd(0x40 | 0);  // select column

    switch (disp)
    {
        case 1:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_yes[i]);
            break;
        case 2:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_green[i]);
            break;
        case 3:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_no[i]);
            break;
        case 4:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_red[i]);
            break;
        case 5:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_orange[i]);
            break;
        case 6:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_stop[i]);
            break;
        case 7:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_start[i]);
            break;
        case 8:
            for (i = 0; i < 38; i++)
                GLCD_Data(font_blue[i]);
            break;
        case 9:
            GLCD_clear();
            break;
    }
}

2) SOURCE CODE FOR USING THE ANALOG-TO-DIGITAL CONVERTER ON THE STM32F407VG DISCOVERY BOARD
FILENAME: adc.h

Description:
- The ADC on the STM32F4 Discovery board was used in 12-bit mode
- PC1 on the board was used in ANALOG mode and was fed the analog audio input
  from the DC biasing circuit

#include "usart.h"

/* Clock configuration for Port C. PC1 is used as analog input */
void RCC_Configuration_adc(void)
{
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOC, ENABLE);
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);
}

/* GPIO configuration of PC1 as analog input */
void GPIO_Configuration_adc(void)
{
    GPIO_InitTypeDef GPIO_InitStructure;

    /* ADC channel 11 -> PC1 */
    GPIO_InitStructure.GPIO_Pin  = GPIO_Pin_1;
    GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AN;       // Analog mode
    GPIO_InitStructure.GPIO_PuPd = GPIO_PuPd_NOPULL;   // No pull-up on PC1
    GPIO_Init(GPIOC, &GPIO_InitStructure);             // PC1 initialization
}

/* ADC initialization - refer to the comments for the modes and resolution */
void ADC_Configuration(void)
{
    ADC_CommonInitTypeDef ADC_CommonInitStructure;
    ADC_InitTypeDef ADC_InitStructure;

    /* ADC common init */
    ADC_CommonInitStructure.ADC_Mode = ADC_Mode_Independent;
    ADC_CommonInitStructure.ADC_Prescaler = ADC_Prescaler_Div4;  // ADC clock = peripheral clock / 4
    ADC_CommonInitStructure.ADC_DMAAccessMode = ADC_DMAAccessMode_Disabled;
    ADC_CommonInitStructure.ADC_TwoSamplingDelay = ADC_TwoSamplingDelay_8Cycles;
    ADC_CommonInit(&ADC_CommonInitStructure);

    ADC_InitStructure.ADC_Resolution = ADC_Resolution_12b;  // 12-bit ADC channel
    ADC_InitStructure.ADC_ScanConvMode = DISABLE;
    ADC_InitStructure.ADC_ContinuousConvMode = DISABLE;
    ADC_InitStructure.ADC_ExternalTrigConvEdge = ADC_ExternalTrigConvEdge_None;
    ADC_InitStructure.ADC_ExternalTrigConv = DISABLE;
    ADC_InitStructure.ADC_NbrOfConversion = 1;
    ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;  // right-align the 12-bit result in 16 bits
    ADC_Init(ADC1, &ADC_InitStructure);                     // Initializing the ADC

    /* Enable ADC1 */
    ADC_Cmd(ADC1, ENABLE);
}

/* Begin an ADC conversion. Each call polls the end-of-conversion status flag
   and returns the 12-bit result (read into a 16-bit value). */
int adc_convert()
{
    /* ADC1 regular channel 11 configuration (PC1) */
    ADC_RegularChannelConfig(ADC1, ADC_Channel_11, 1, ADC_SampleTime_15Cycles);

    ADC_SoftwareStartConv(ADC1);                     // start the conversion
    while (!ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC));  // wait for end of conversion
    return ADC_GetConversionValue(ADC1);             // return the converted data
}

3) SOURCE CODE FOR USART INITIALIZATION
FILENAME: usart.h

Description:
1) GPIO pins initialized such that PB10 is used as Tx and PB11 as Rx
2) PB10 and PB11 constitute the USART3 channel
3) Sends 8 bits of data over USART to the terminal
4) Interface used primarily for debugging

#include <stm32f4xx.h>
#include <stm32f4xx_gpio.h>
#include <stm32f4xx_usart.h>
#include <stm32f4xx_rcc.h>
#include <misc.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <stm32f4xx_adc.h>

int ConvertedValue = 0;  // converted value read from the ADC

void RCC_Configuration(void)
{
    RCC_APB1PeriphClockCmd(RCC_APB1Periph_USART3, ENABLE);
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOB, ENABLE);
}

void GPIO_Configuration(void)
{
    GPIO_InitTypeDef GPIO_InitStructure;

    /* GPIO configuration: PB10/PB11 in alternate-function mode for USART3 */
    GPIO_InitStructure.GPIO_Pin = GPIO_Pin_10 | GPIO_Pin_11;
    GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AF;
    GPIO_InitStructure.GPIO_OType = GPIO_OType_PP;
    GPIO_InitStructure.GPIO_PuPd = GPIO_PuPd_NOPULL;
    GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz;
    GPIO_Init(GPIOB, &GPIO_InitStructure);

    GPIO_PinAFConfig(GPIOB, GPIO_PinSource10, GPIO_AF_USART3);
    GPIO_PinAFConfig(GPIOB, GPIO_PinSource11, GPIO_AF_USART3);
}

void USART3_Configuration(void)
{
    USART_InitTypeDef USART_InitStructure;

    /* USART3: 9600 baud, 8 data bits, 1 stop bit, no parity, no flow control */
    USART_InitStructure.USART_BaudRate = 9600;
    USART_InitStructure.USART_WordLength = USART_WordLength_8b;
    USART_InitStructure.USART_StopBits = USART_StopBits_1;
    USART_InitStructure.USART_Parity = USART_Parity_No;
    USART_InitStructure.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
    USART_InitStructure.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
    USART_Init(USART3, &USART_InitStructure);

    USART_Cmd(USART3, ENABLE);
}

void putch(unsigned char c)  // put a character on the terminal through USART
{
    while (USART_GetFlagStatus(USART3, USART_FLAG_TXE) == RESET);
    USART_SendData(USART3, c);
}

void putstr(char *ss)  // put a string on the terminal; calls putch()
{
    while (*ss)
    {
        putch(*ss);
        ss++;
    }
}

4) SOURCE CODE FOR USING THE EXTERNAL INTERRUPT
FILENAME: EXT0_interrupt.h

Description:
1) An external push button is interfaced to PB0
2) The push button is used to switch between the modes of operation

#include "lcd.h"
#include "stm32f4xx_exti.h"
#include "stm32f4xx_syscfg.h"

/* Reference: my.st.com forum for the external interrupt handler */

/* PB0 is used as the input for the external interrupt and is mapped to the
   EXTI0 channel */
void interrupt_initialize(void)
{
    GPIO_InitTypeDef GPIO_InitStructure;
    EXTI_InitTypeDef EXTI_InitStructure;
    NVIC_InitTypeDef NVIC_InitStructure;

    /* Enable GPIOB clock */
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOB, ENABLE);

    /* Enable SYSCFG clock */
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_SYSCFG, ENABLE);

    /* Configure PB0 as input, pulled up to detect a falling-edge trigger */
    GPIO_InitStructure.GPIO_Mode = GPIO_Mode_IN;
    GPIO_InitStructure.GPIO_PuPd = GPIO_PuPd_UP;
    GPIO_InitStructure.GPIO_Pin = GPIO_Pin_0;
    GPIO_Init(GPIOB, &GPIO_InitStructure);

    SYSCFG_EXTILineConfig(EXTI_PortSourceGPIOB, EXTI_PinSource0);

    /* Configure EXTI Line0 */
    EXTI_InitStructure.EXTI_Line = EXTI_Line0;
    EXTI_InitStructure.EXTI_Mode = EXTI_Mode_Interrupt;
    EXTI_InitStructure.EXTI_Trigger = EXTI_Trigger_Falling;
    EXTI_InitStructure.EXTI_LineCmd = ENABLE;
    EXTI_Init(&EXTI_InitStructure);

    /* Enable and set the EXTI Line0 interrupt to a low priority */
    NVIC_InitStructure.NVIC_IRQChannel = EXTI0_IRQn;
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0x01;
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0x01;
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);
}

/* Interrupt handler: cycle through the modes and display the current mode */
void EXTI0_IRQHandler(void)
{
    unsigned int d, i;

    GLCD_clear();
    mode = mode + 1;
    if (mode == 5)
        mode = 1;

    putch('\n');
    putch('\r');
    print_buf("\n\r MODE : ");
    putch(mode + 0x30);
    print_buf("\n\n\r");

    // Print the mode on the GLCD
    chip_select(1);
    GLCD_Comd(0xb8 | 4);  // select page
    GLCD_Comd(0x40 | 0);  // select column
    for (i = 0; i < 49; i++)
    {
        if (mode == 1)
            GLCD_Data(font_mode1[i]);
        else if (mode == 2)
            GLCD_Data(font_mode2[i]);
        else if (mode == 3)
            GLCD_Data(font_mode3[i]);
        else if (mode == 4)
            GLCD_Data(font_mode4[i]);
    }

    /* crude software delay before re-arming the interrupt */
    for (i = 0; i < 100; i++)
        for (d = 0; d < 65536; d++)
            asm("nop");

    EXTI_ClearITPendingBit(EXTI_Line0);
}
