
Universidade de Aveiro
Departamento de Electrónica, Telecomunicações e Informática
2016

Miguel Castro Migueis Vieira

VLC based Position Estimation for Robotic Navigation

Estimação da Posição baseada em VLC para Navegação Robótica



Dissertation presented to the Universidade de Aveiro in fulfilment of the requirements for the Master's degree in Electronics and Telecommunications Engineering, carried out under the scientific supervision of Professor Doutor Pedro Fonseca and the co-supervision of Professor Doutor José Luís Costa Pinto de Azevedo, Professors at the Departamento de Electrónica, Telecomunicações e Informática of the Universidade de Aveiro.


o júri / the jury

presidente / president: Professor Doutor Telmo Reis Cunha, Professor Auxiliar, Universidade de Aveiro

vogais / examiners committee: Professora Doutora Mónica Jorge Carvalho de Figueiredo, Professora Adjunta, Departamento de Engenharia da Escola Superior de Tecnologia e Gestão do Instituto Politécnico de Leiria (Arguente Principal)

Professor Doutor Pedro Nicolau Faria da Fonseca, Professor Auxiliar, Universidade de Aveiro (Orientador)


agradecimentos / acknowledgements: I would like to start by thanking my supervisor, Prof. Pedro Fonseca of the Universidade de Aveiro, for introducing me to a field with great potential, for all the support throughout the work, for giving me the pleasure and the opportunity to present the paper publicly, and for insisting that the work be rigorous and concise. I would also like to thank Prof. Luís Nero, Prof. João Paulo Barraca and Nuno Lourenço for helping me correct some errors found during the presentations, and my co-supervisor, Prof. José Luís Azevedo, for all the help in building the rotating base of the prototype.

I would like to thank my parents for telling me the work looked very nice, just so they would not have to hear the explanation, and because it is thanks to them that this was possible. I also want to thank my brother, André Vieira, Professor at the Universidade de São Carlos, Brazil, for helping me correct some flaws. A big hug to all my friends who supported me, made me laugh and listened to me at all times, and to the people in the laboratory who provided moments of fun. Finally, I want to thank Rita Vale, for all the support, the attention, the time spent correcting my dissertation, and for lifting my head and telling me to keep going.

To all of you, thank you very much!


Resumo: The increasingly frequent use of LEDs as artificial illumination has favoured the development of indoor positioning based on visible light. In this dissertation, a survey was made of the strategies and sensors used in this type of positioning. To that end, we propose to estimate the position of a robot through visible light communication, using a prototype developed in previous work.

The work was organised in four stages. In a first phase, it was verified that the aforementioned prototype is suitable for position estimation. Its limitations led to the creation of a simulation environment in which similar structures were studied. Next, the results obtained in the simulator were compared with similar experiments using the prototype. Afterwards, a noise model was implemented in the simulation environment, allowing the study of its influence on the estimated position.

The results obtained show that it is possible to implement a visible light positioning system using a simple sensor composed of a few photo-diodes placed over a hemispherical dome, representing a low-cost solution for visible light positioning. The comparison between the results obtained with the simulator and with the prototype showed that the former can produce a response identical to the prototype. With the implementation of the noise model, the results present an error of a few centimetres. We conclude that the field of view of the photo-diodes plays a very important role when the position is estimated. The sensors' fields of view should be wide enough to overlap with each other, preventing blind spots, but not too wide, since that would lead to errors because all sensors would receive signal.


Abstract: The widespread use of LEDs as artificial illumination has led to the development of indoor positioning systems using visible light. In this work, we gathered information on strategies and sensors used in visible light positioning (VLP). We propose to estimate a robot's position based on visible light communication (VLC), using a prototype developed in previous work.

The work was divided into four stages. Initially, we verified that the prototype used was suitable to estimate its position. In order to overcome the prototype's limitations, a simulation environment was developed, where similar structures were tested. This allowed the comparison between the results obtained using the prototype and those from the simulator. Finally, a noise model was implemented in the simulator to verify its influence on the position estimation.

The results show the viability of implementing VLP using a simple sensor based on a set of photo-diodes placed over a hemispherical dome, yielding a low-cost solution for VLP. When comparing the results obtained with the prototype and the simulator, we verified that the responses are identical. With the implementation of the noise model, the results show an error of a few centimetres. We concluded that the photo-diodes' field of view is important when the position is estimated. The sensors' fields of view should be wide enough to overlap with each other in order to prevent blind spots, but not too wide, since that would lead to errors because all sensors would receive signal.


Contents

Contents
List of Figures
List of Tables
Acronyms

1 Introduction
  1.1 Context
  1.2 Motivation
  1.3 Proposal
  1.4 Dissertation Structure

2 State of the Art
  2.1 Sensors
  2.2 Structures and approaches
  2.3 Algorithms
  2.4 Chapter Remarks

3 Experimental setup and results analysis
  3.1 Position Estimation with the Prototype
    3.1.1 Prototype
    3.1.2 Position estimation algorithm
    3.1.3 Test Procedure
    3.1.4 Results analysis
  3.2 Simulation Environment for VLP
    3.2.1 Modelling
    3.2.2 Adapt the algorithm
    3.2.3 Test Procedure
    3.2.4 Results analysis
  3.3 Simulation Environment Validation
    3.3.1 Test Procedure
    3.3.2 Results analysis
  3.4 Noise Model Implementation
    3.4.1 Noise in the PD
    3.4.2 Implementation
    3.4.3 Test Procedure
    3.4.4 Results analysis
  3.5 Chapter Remarks

4 Conclusion and Future Work
  4.1 Conclusion
  4.2 Future Work

Bibliography

A Functions and Scripts
  A.1 Tables of functions and scripts used

B Data
  B.1 Data received from the first and fourth quadrant


List of Figures

2.1 Diagram of VLC multiple applications and different ways to implement VLP
2.2 PDA10A photo-diode
2.3 Possible scenario to use VLP
2.4 Visible light indoor imaging optical wireless system model
2.5 Visible light positioning system model using three light sources
2.6 LED structure model

3.1 Azimuth (θ) and elevation (φ) representation
3.2 Photodiode circuit
3.3 Transmitter circuit
3.4 Used prototype composed of 8 photo-detectors and a microcontroller
3.5 Test procedure loop
3.6 Floor of the room with the test points marked
3.7 Signal received in all PDs in position (1.0, -0.5) with an azimuth of 70°
3.8 Measured position compared with the real position
3.9 Error associated to each test point
3.10 Structure with a step motor
3.11 The two main modules from the simulation environment
3.12 Incidence angle and field-of-view
3.13 A VLP sensor with 8 meridians and 3 parallels
3.14 Error map, using largest sum
3.15 Error map, using most non-zero values
3.16 Maximum and average error as a function of FOV
3.17 Right angle triangle made with information provided by the sensors
3.18 Signal received by the photo-detectors
3.19 Signal received by the PDs in the simulator
3.20 Both real and simulated results overlap
3.21 Map of points with the results from real and simulated scenario
3.22 Photo-detector current
3.23 Hemispherical dome with meridians and parallels represented
3.24 The error of a 6x24 structure with different FOV, varying the SNR
3.25 The error of an 8x32 structure with different FOV, varying the SNR
3.26 The error of a 10x40 structure with different FOV, varying the SNR
3.27 The three different configurations with a FOV of 10°
3.28 The three different configurations with a FOV of 20°
3.29 The three different configurations with a FOV of 30°
3.30 The three different configurations with a FOV of 40°


List of Tables

3.1 Intensity received in each sensor in all azimuth angles
3.2 The maximum, average and minimum values of error
3.3 Comparison of methods for choosing the best line for azimuth computation
3.4 Values used to get the real FOV and elevation angle of the prototype
3.5 Comparison between the elevation and azimuth from the experience and the validation framework
3.6 The average error of all configurations with the different FOVs

A.1 Table of functions used for all tests performed
A.2 Table of scripts used for all tests performed

B.1 Signal received in each PD in first quadrant
B.2 Signal received in each PD in first quadrant
B.3 Signal received in each PD in first quadrant
B.4 Signal received in each PD in fourth quadrant


Acronyms

AOA Angle of Arrival

FOV Field of View

GPS Global Positioning System

HFOV Half Field of View

HTM Homogeneous Transformation Matrix

IPS Indoor Positioning System

IR Infrared

LBS Location Based Services

MIMO Multiple Input Multiple Output

PD Photo-diode

RF Radio Frequency

RSSI Received Signal Strength Indication

SNR Signal-to-Noise Ratio

TDOA Time Difference of Arrival

TOA Time of Arrival

VLC Visible Light Communication

VLP Visible Light Positioning


Chapter 1

Introduction

1.1 Context

Visible Light Positioning (VLP) is emerging as a solution for indoor localization. Interest in VLP has risen, among other reasons, as a result of the dissemination of LED illumination [1].

Studies on positioning systems for Location Based Services (LBS) have focused on the Global Positioning System (GPS), infrared, radio frequency (RF), ultrasound and, more recently, Visible Light Positioning (VLP). GPS is an efficient system for tracking or determining the location of various objects in open surroundings. However, it is unable to provide location inside buildings [2–4]. The use of an indoor positioning system in large places such as hypermarkets, hospitals and airports, among others, will simplify navigation [5–8]. Indoor positioning systems based on infrared (IR), RF and ultrasound have some constraints. An IR-based system requires building an infrastructure to place the receivers and wires, incurring additional costs [5, 9]. RF-based methods are used on a daily basis in cell-phone communications, computer wireless networks, GPS and wireless communication in general. Restrictions of radio-based systems include multipath propagation, difficult deployment [5], the requirement for extremely accurate time measurements, the relatively poor accuracy of indoor positioning achievable by RF-based techniques and electromagnetic interference [1]. Like IR, ultrasound requires building a specific infrastructure [10, 11]. Ultrasonic high-precision indoor positioning systems (IPS) use Direct Sequence Code-Division Multiple Access (DS-CDMA) techniques, but these bring their own disadvantages: the lack of power control in CDMA-based systems, sensitivity to the Doppler effect due to the speed of sound, and distortions caused by the responses of the transducer [12]. Compared to these methods, indoor positioning systems using Visible Light Communication (VLC) present several advantages. They can be implemented at low cost, relying on an existing LED-based illumination infrastructure and requiring only a transmitter (light source) and a receiver (photo-detector, PD) [3]. They can also be used in RF-sensitive areas such as hospitals and aircraft. In addition, several authors refer that there is little influence of multipath interference caused by non-line-of-sight propagation [5, 13].

Techniques for VLP are based on triangulation, trilateration and fingerprinting [14], which, in some cases, are combined with other localization methods. Triangulation is the process of determining the location of a point by measuring angles to it from known points at either end of a fixed baseline [14]. A typical example of triangulation is the angle of arrival (AOA) method, which determines the direction by measuring the time difference of arrival (TDOA) at the individual elements of an array; from these delays, the AOA can be calculated [1, 15]. Trilateration is the process of determining the location of points by measuring distances to reference points and using geometry [14]. This can be achieved by methods such as time of arrival (TOA), time difference of arrival (TDOA) and received signal strength indication (RSSI) [1]. Fingerprinting consists of estimating the relative position by matching the data from online measurements with pre-measured location-related data [16]. VLC uses LEDs due to their affordable cost, low power consumption and low heat generation. Transmitting digital signals using LEDs is a developing communication technology with broad application prospects for indoor communications [1, 2, 4, 5]. There are several standards for VLC, including IEEE 802.11 IP PHY, IEEE 802.15.7 and JEITA CP-1221. However, while these are accepted as an important concern of VLC and Visible Light Positioning (VLP), some researchers do not use the standards above. VLP systems can be designed with a number of different architectures, depending on the purpose of the study [1].

1.2 Motivation

Nowadays, LEDs are widely used as artificial illumination, mostly due to their advantages. Not only do they have low power consumption, low heat generation and high light intensity, but they also allow light modulation for data broadcasting, and LED luminaires can work as beacons for indoor robot positioning. These reasons, together with the widespread use of this type of illumination, are opening doors to Visible Light Positioning systems.

1.3 Proposal

Researchers have been looking for the most reliable and precise indoor positioning system. The increasing use of LEDs and the consequent development of VLP make this a promising navigation approach. As such, the aim of this work is to estimate the position of a robot platform using Visible Light Communication. For that, we will use a prototype created in [17].

This work is based on previous developments made by our research team, documented in [17]. In this dissertation, we investigate the application of the sensor developed in [17] to the problem of determining the location of an object in an indoor environment, namely by studying the algorithms for computing the object location and the performance of these algorithms in terms of location error.


1.4 Dissertation Structure

This dissertation is divided into four chapters. Chapter 1 consists of an introduction to this work, the motivation and the thesis proposal. Chapter 2 summarizes what has been done so far on VLP, namely the sensors, structures and algorithms used, as well as the different approaches. A general overview of all experiments, their methodologies and the analysis of their results is presented in Chapter 3. Finally, Chapter 4 presents the conclusions of this work and some notes on future work.



Chapter 2

State of the Art

This chapter reviews the literature relevant to this dissertation, with the aim of better understanding Visible Light Positioning and the different sensors, structures, approaches and algorithms used. Figure 2.1 represents the different applications of VLC and the many ways in which VLP can be implemented.

Figure 2.1: Diagram of VLC multiple applications and different ways to implement VLP.

2.1 Sensors

Sensors are commonly used in robotics since, by processing the data received, they allow robots to interact with their surroundings. The choice of sensor will depend on the final purpose of the study. A photo-diode (PD), an image sensor or a mobile phone camera are examples of sensors used in VLP [16]. We will focus on the PD, since it presents several advantages: for instance, its low cost, fast response time, low noise and usability with almost any visible or near-infrared light source, such as LEDs, neon or fluorescent lamps, among others. Its applicability is vast: it can be used in cameras for photographic flash control, in safety equipment such as smoke detectors, in industry as a position sensor, and in communications for optical communications and VLP [18]. It is a semiconductor device that generates an output proportional to the light intensity.

There are also more complex devices based on PDs, such as the PDA10A used by [19] (figure 2.2), which can be fixed or moving, with different degrees of freedom. Note that even though most of the sensors used are similar, what differentiates the indoor positioning systems are the structures and the different approaches to achieve the same goal.

Figure 2.2: PDA10A photo-diode.

2.2 Structures and approaches

Visible light positioning can be used in different scenarios: in a warehouse, where the stacker uses the artificial illumination (figure 2.3) to store packages; in a hospital, to locate a wheelchair; or even in a museum, detecting the guided-tour headphones [1]. For that to be possible, it is necessary to establish communication strategies: how to transmit the signal and how it will be interpreted.

Figure 2.3: Possible scenario to use VLP [1].


Multiple input multiple output (MIMO) is a possible way to broadcast wireless information using several transmitters and receivers. MIMO has become an essential element of wireless communication, having more capacity in its channels than single input single output. For an indoor positioning system, MIMO relaxes the alignment required at the receiver, since it is not necessary that light from a source precisely strikes a single detector [13]. It is used to minimize errors and optimize data speed. Besides MIMO, other ways to transmit information are used, such as LEDs dimmed via pulse width modulation (PWM) [20], intensity modulation with direct detection (IM/DD), optical code division multiple access (OCDMA) [19] and frequency division multiple access (FDMA) [21]. Like PWM, on-off keying (OOK) and pulse-position modulation (PPM) have been used as modulation methods in VLC single-channel systems. OOK denotes the simplest form of amplitude-shift keying (ASK) modulation, representing digital data as the presence or absence of a carrier wave. In its simplest form, the presence of a carrier for a specific duration represents a binary one, while its absence for the same duration represents a binary zero. Some more sophisticated schemes vary these durations to convey additional information. It is analogous to a unipolar line code [22]. PPM is a form of signal modulation in which M message bits are encoded by transmitting a single pulse in one of 2^M possible time shifts. This is repeated every T seconds, so that the transmitted bit rate is M/T bits per second. It is primarily useful for optical communication systems, where there tends to be little or no multipath interference [23]. When illuminated LEDs are densely arranged, inter-cell interference may be a serious problem for single-channel systems. In order to overcome that interference, a carrier-allocation VLC system is a possible solution, as proposed in [24]. IM is a form of modulation in which the optical power of a source is varied in accordance with some characteristic of the modulating signal. The IM/DD method is usually used in VLC due to the simple implementation of a low-cost receiver, the variety of possible modulation formats and the high data rate compared with VLC based on image processing. Variations in the intensity of the light emitted by an LED can be converted into current variations by a photo-detector (PD), so the modulated signal can be recovered at the receiver [24]. In OCDMA, an optical code (OC) represents a user address and signs each transmitted data bit, based on the principle that codes are mapped to the identities or addresses of users following a code-user relation [25]. FDMA consists of dividing the frequency band allocated for wireless communication into multiple channels, each of which can carry data; it can be used with both analog and digital signals [26].
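As a concrete illustration of the two single-channel modulation schemes described above, the short sketch below maps a bit stream to OOK symbols and to 4-PPM symbols. It is illustrative only and not taken from this dissertation; the function names and slot counts are assumptions.

```python
# Illustrative sketch (not from this dissertation): OOK and PPM bit mappings.
def ook_encode(bits):
    """OOK: carrier present for a '1', absent for a '0' (one slot per bit)."""
    return [1 if b else 0 for b in bits]

def ppm_encode(bits, bits_per_symbol=2):
    """PPM: each group of M bits selects the pulse position among 2**M slots."""
    n_slots = 2 ** bits_per_symbol
    symbols = []
    for i in range(0, len(bits), bits_per_symbol):
        group = bits[i:i + bits_per_symbol]
        position = int("".join(str(b) for b in group), 2)
        slot = [0] * n_slots
        slot[position] = 1          # a single pulse in the selected time slot
        symbols.append(slot)
    return symbols

print(ook_encode([1, 0, 1, 1]))     # [1, 0, 1, 1]
print(ppm_encode([1, 0, 1, 1]))     # [[0, 0, 1, 0], [0, 0, 0, 1]]
```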

Even with many techniques to transmit information, the best way to interpret the data is still being studied. Many indoor positioning systems use received signal strength indication (RSSI) to estimate the position [2, 14, 19, 20, 27, 28]. The received power derives from the Lambertian radiation pattern of the LED, the LED's height and the distance between the LED and the mobile station. Given all the necessary parameters and the signal power at the receiver, the distances between the LEDs and the target can be calculated and used for trilateration [16]. Using this technique, and knowing the LED positions, very accurate positioning can be achieved [1]. Time of arrival (TOA) is another technique frequently used in localization, and it is the basis of the GPS system. Nevertheless, it requires the transmitted signals to be very accurately synchronized [1]. TOA works by calculating the distances between the LEDs and the target from the arrival time of the signals and then uses these estimated distances to derive the position of the target. For visible light, the distance is calculated by directly multiplying the propagation delay of the signal by the speed of light. Another technique is time difference of arrival (TDOA), which estimates the position by assuming that all the LEDs transmit signals to the receiver; due to the differences in the distances from the receiver to the LEDs, the times at which the signals arrive at the receiver are different [16]. Another method for VLP systems is the angle of arrival (AOA). After obtaining the angles of arrival, the position of the target is determined as the intersection of the multiple beams [16]. Positioning based on AOA is not frequently used in radio-based systems because of obstacles between the transmitter and the receiver. For VLP this is different: the receiver will always have line of sight to a number of light bulbs. AOA-based positioning systems are promising for VLP not only because precisely designed lenses do not have large associated costs, but also because the error associated with light reflections on walls is relatively small [1]. Compared with other techniques used in VLP, AOA does not require synchronization between LEDs and can be used to estimate the position in 2D or 3D [16].
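To make the RSSI and trilateration idea concrete, the sketch below solves a 2D position from distances to three LEDs with known positions by linearising the circle equations. It is illustrative only and not part of this dissertation; the LED map and the ranges are hypothetical.

```python
# Illustrative trilateration sketch (not from this dissertation): estimate a
# 2D receiver position from ranges to LEDs at known positions. Subtracting
# the first circle equation from the others yields a linear system in (x, y).
import numpy as np

def trilaterate(anchors, distances):
    """anchors: (N, 2) LED positions; distances: (N,) measured ranges."""
    (x0, y0), d0 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

leds = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # hypothetical LED map
true_pos = np.array([1.5, 2.0])
ranges = np.linalg.norm(leds - true_pos, axis=1)          # ideal RSSI-derived ranges
print(trilaterate(leds, ranges))                          # ~ [1.5, 2.0]
```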

Knowing the possible ways to transmit data using visible light and how the receivers can interpret that data to estimate the target position, it is important to know where the LEDs are usually placed and the different types of structures used for position estimation. Usually, LEDs are placed on the ceiling, forming a square with four LEDs (Figure 2.4), a triangle (Figure 2.5) or, in some cases, using a single LED [3, 27]. Reversing the usual approach, [14] proposed a mobile station composed of LEDs at different angles (Figure 2.6), with photo-diodes distributed on the ceiling. Their approach, combined with accelerometers, results in a more accurate positioning system; however, it is not practical. An indoor localization system using multiple optical receivers oriented with different elevations and azimuths, forming a dome, was proposed in [27], while [3] used a structure composed of a single PD with a fixed elevation that rotates to three different azimuth angles. Even though the hardware represents a significant part of VLP, the software, more precisely the algorithm, is also important.

2.3 Algorithms

Several studies use algorithms to estimate the target position and, even though this is of the utmost importance, some authors do not describe them. In this section, we summarize the information gathered on this subject. Jia et al. in [2] used the minimum mean square error algorithm and the maximum likelihood algorithm. The minimum mean square error algorithm presents low complexity. If the calculated distance to a light source is different from the real distance, the position error will be high. The maximum likelihood algorithm uses the probability density function. In a real scenario, the probability density function can be obtained by experimental analysis beforehand or calculated by applying some wireless transmission model. Maximum likelihood is more complex than the minimum mean square error and is not applicable in all cases. The authors also considered a range-free VLC-based positioning method and a hybrid positioning method based on VLC and RSSI. In the range-free VLC-based positioning method, the received power is not measured or calculated. With this method, multiple LED bulbs need to be set as reference nodes. Each node has an ID and, when the receiver detects the IDs of multiple light nodes, it is possible to estimate its position. This approach is limited since each LED has a receivable area. In the hybrid method, the range-free positioning method is used to obtain the possible position of the receiver, and then the minimum mean square error or maximum likelihood algorithms are used to estimate the final position.

Figure 2.4: Visible light indoor imaging optical wireless system model ([13]).

Figure 2.5: Visible light positioning system model using three light sources ([24]).

Figure 2.6: LED structure model ([14]).

In [27], Yang et al. separate the positioning algorithm into two parts (2D and 3D): in the 2D algorithm, every optical receiver obtains the power from the light source. To estimate the position, the height component is given and the transmission distance, the angle gain and the incidence angle gain of the optical receiver are used. The 3D algorithm consists of positioning in the xOy plane by taking the 2D algorithm results and, to compensate for the height component, using the information from the transmitter.

Ganti et al. in [28] compared the performance of two tracking algorithms (Kalman filter and particle filter) for an indoor positioning system. The positioning algorithm they used is an asynchronous method based on RSSI measurements of the light signal emitted by modulated LEDs. The RSSI measurements are translated into pseudo-ranges from the LED bulbs to the receiver, which can then be located by trilateration. They concluded that the Kalman filter is simple and gives accurate estimates. However, when outliers were introduced into the measurements, the particle filter was more robust and could therefore track more precisely.

In [20], Li et al. use trilateration to calculate the receiver position from distance measurements to multiple light sources. Their LEDs were dimmed via PWM to carry digital information and the geometrical information was based on RSSI. By measuring the received signal strength, they calculate the distance to the light and the incidence and irradiation angles. They first addressed the normal cases with sufficient light sources and then the challenging cases with insufficient sources.

Kuo et al. in [29] used a localization algorithm based on AOA and an ideal camera with a biconvex lens. An important property of a biconvex lens is that a ray of light passing through the center of the lens is not refracted. If the transmitter forms an angle with the receiver, the image at the receiver drifts away from the origin because of the lens and, knowing the distance to the origin, it is possible to determine the incidence angle. Their algorithm assumes that the transmitters' locations are known and, since they use a smartphone camera, the position estimation is based on the images from the camera (pixel size and focal length) and uses the minimum mean square error as an optimization. In their images, they need to identify a light source and match its identity against a map of global coordinates.


2.4 Chapter Remarks

Due to the different types of light modulation, geometric information, sensors, structures and even algorithms, there is no single way to implement visible light positioning, since this depends on the final purpose of the study. In the next chapter, we present the tests performed and their methodologies, using the prototype developed in [17]. The prototype is composed of eight photo-diodes at different elevation angles. The algorithm used to estimate the position is based on the received signal strength at each photo-diode and simple trigonometry.



Chapter 3

Experimental setup and results analysis

In this chapter we present the experiments performed, their methodologies and resultsanalysis.

3.1 Position Estimation with the Prototype

This experiment is based on position estimation using the prototype characteristics and its orientation. Throughout this chapter, the terms azimuth and elevation will appear frequently. The azimuth θ represents the rotation from the initial position around the zz axis (figure 3.1). The elevation φ is the angle at which the received intensity is highest or at which a PD was placed (figure 3.1).

Figure 3.1: Azimuth (θ) and elevation (φ) representation.


3.1.1 Prototype

The prototype developed in [17] (figure 3.4) is composed of eight photo-diodes (receivers) and a microcontroller (PIC32), which processes the data. Each circuit was built individually (Figure 3.2) and placed in the prototype structure manually.

Figure 3.2: Photodiode circuit ([17]).

The transmitter is composed of three LEDs and a MOSFET that works as a switch(figure 3.3).

Figure 3.3: Transmitter circuit ([17]).

In the equipment presented in figure 3.4, each sensor was placed at a different elevation angle. Those angles were 90.0, 79.0, 68.5, 58.5, 48.0, 37.5, 27.5 and 18.0 degrees, from top to bottom. These angles were used in the position estimation algorithm described in the next subsection.

3.1.2 Position estimation algorithm

The algorithm used to estimate the position is based on RSSI, simple trigonometry and a weighted average. The algorithm is composed of two parts, as detailed next. The first part corresponds to the selection of the relevant signal and the second uses the information from all measurements to estimate the position.

Each PD receives the signal according to its elevation angle, the angle to the light bulb and the distance to it. With the information from each PD, the signal amplitude, the average and the frequency were calculated using the script proceValoresPic (table A.2).


Figure 3.4: Used Prototype composed of 8 Photo-detectors and a microcontroller.

To distinguish relevant from irrelevant information, strategies based on the variance, average and maximum of the signal and on a weighted average (function mediaPonderada, table A.1) were used. To avoid a discrepancy of values, this approach benefits the angles with larger signal amplitude and rejects the angles with insignificant amplitude.

As mentioned before, the algorithm uses a weighted average to approximate the elevation and azimuth angles. The equations used are given by

\theta = \frac{\sum_{i=1}^{n} w_i \cdot \theta_i}{\sum_{i=1}^{n} w_i} \qquad (3.1)

\varphi = \frac{\sum_{i=1}^{n} w_i \cdot \varphi_i}{\sum_{i=1}^{n} w_i} \qquad (3.2)

In equation (3.1), θ is the azimuth's weighted average, used to estimate the light source azimuth, θi represents the different azimuth angles and wi their signal amplitudes. In equation (3.2), φ is the weighted average of the elevation angle and φi the different elevation angles with relevant signal. Using the data from each measurement, we are able to calculate the azimuth angle with (3.1) and the final elevation with (3.2), and we determine the position using the following equations.

d = \frac{h}{\tan(\varphi)} \qquad (3.3)

x' = \cos(\theta) \cdot d \qquad (3.4)

y' = \sin(\theta) \cdot d \qquad (3.5)

In equation (3.3), d is the distance from the prototype to the light source, which is at a height h. The estimated position is the result of equations (3.4) and (3.5), which give x' and y', respectively.


The estimated position holds an error with respect to the correct position. The error was obtained using the Euclidean distance (3.6) between the measured point and the real point, where x and y represent the real position and x' and y' the estimated one.

d_{euc} = \sqrt{(x - x')^2 + (y - y')^2} \qquad (3.6)
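A minimal sketch of this procedure is shown below. It is illustrative only: the dissertation's own scripts (proceValoresPic, mediaPonderada, listed in Annex A) are not reproduced here, and the function names and the relevance threshold are assumptions.

```python
# Minimal sketch of the prototype position estimation, eqs. 3.1-3.6.
# Illustrative only; the relevance threshold and names below are assumptions.
import math

def weighted_angle(angles_deg, weights):
    """Weighted average of angles (eqs. 3.1 and 3.2)."""
    return sum(w * a for a, w in zip(angles_deg, weights)) / sum(weights)

def estimate_position(elev_deg, elev_amp, azim_deg, azim_amp, h, threshold=1.0):
    """Estimate (x', y') from the relevant PD amplitudes and the light height h."""
    elev = [(a, w) for a, w in zip(elev_deg, elev_amp) if w > threshold]
    azim = [(a, w) for a, w in zip(azim_deg, azim_amp) if w > threshold]
    phi = math.radians(weighted_angle(*zip(*elev)))      # eq. 3.2
    theta = math.radians(weighted_angle(*zip(*azim)))    # eq. 3.1
    d = h / math.tan(phi)                                # eq. 3.3
    return d * math.cos(theta), d * math.sin(theta)      # eqs. 3.4, 3.5

def position_error(real, est):
    """Euclidean distance between real and estimated positions (eq. 3.6)."""
    return math.hypot(real[0] - est[0], real[1] - est[1])
```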

To gather all the information needed to estimate the position, it is necessary to followthe procedure described next.

3.1.3 Test Procedure

The procedure for this experiment is represented in figure 3.5. Before turning on the light source, it is necessary to set some configurations and prepare the room. The LED was placed at a height of 2.75 m and a function generator was used to generate a 5 kHz square wave. The frequency was relatively high so that it would not disturb us during the experiment, since at lower frequencies one can see the LED blinking. Its light intensity can be controlled by changing the supply voltage.

Figure 3.5: Test procedure loop.

The signal was configured with an amplitude of 5 V and an offset of 2.5 V. The power supply was configured to be between 8 V and 11 V, where 11 V represents the maximum allowed intensity; more than that can damage the LED. These settings follow the recommendations in [17].

In the room, it was necessary to mark on the ground the place under the transmitter (light source). This mark was the origin of the reference frame (0.0; 0.0). Afterwards, we marked 20 test points on the ground (Figure 3.6), but only 16 of them were used. Each point represents a different position.

After placing the marks on the ground, we drew a circle with the same diameter as the prototype on a piece of cardboard, where we marked angles from -90 to 90 degrees in multiples of 10, plus 45 and -45 degrees.


Figure 3.6: Floor of the room with the test points marked.

To collect the data, we placed the prototype and the cardboard over a test point, aligned with the xx axis.

3.1.4 Results analysis

The data received at a given position was used to obtain the azimuth, the elevation and the estimated position. Figure 3.7 shows the intensity received by the prototype at position (1.0, -0.5), rotated 70°. It is possible to observe that only the photo-diode at 68.5° has relevant signal. The signal in the other PDs was considered irrelevant.

Figure 3.7: Signal received in all PDs in position (1.0, -0.5) with an azimuth of 70◦.

Table 3.1 presents all the maximum intensities collected at one specific point, (1.0, -0.5), out of the 16 test points. The intensities for the other points are presented in Annex B.

Table 3.1: Intensity received in each sensor at all azimuth angles, at position (1.0; -0.5) m.

Azimuth (°)   Signal amplitude in the different PDs
              90°    79°    68.5°   58.5°   48°    37.5°   27.5°   18°
0             4.5    4.0    4.5     4.0     3.0    3.0     3.5     3.5
10            4.5    4.0    4.5     4.0     3.0    3.0     3.5     3.5
20            4.5    4.0    4.5     4.0     3.0    3.0     3.5     3.5
30            4.5    4.0    3.5     4.0     4.0    3.5     4.0     2.5
40            4.5    4.0    3.5     4.0     4.0    3.5     4.0     2.5
45            4.5    4.0    6.0     4.0     4.0    3.5     4.0     2.5
50            4.0    3.5    26.5    4.0     4.0    3.5     3.5     4.0
60            4.5    4.0    43.5    5.0     4.0    4.0     3.5     3.5
70            4.5    4.0    43.0    5.0     3.5    2.5     3.5     3.5
80            4.5    3.5    24.0    3.5     3.0    3.5     4.0     3.5
90            4.5    5.0    4.5     4.5     4.5    4.5     3.5     4.5

Figure 3.8 shows how the estimated points are located compared to the test points and the light source. Figure 3.9 represents the error associated with each estimated position.

Figure 3.8: Measured position compared with the real position.

Using the errors from figure 3.9, we calculated the average error, as shown in table 3.2. The result obtained was an average error of 11 cm and a maximum error of 28 cm. The average error was calculated as the arithmetic average of the errors in figure 3.9.

This experiment can be divided into two parts. One part corresponds to the azimuth and the prototype orientation, and the other to the elevation. Regarding the error and the estimated angles (azimuth and elevation), rotating the prototype in steps of 10° can be too coarse given the prototype's field of view (FOV). By improving the resolution, more area will be covered by the PDs' FOV and, with more data, the weighted average will reduce the discrepancy in the azimuth values. To solve this problem, instead of dividing the quadrant into 10 steps we could divide it into 20 or more.


Figure 3.9: Error associated to each test point.

Table 3.2: The maximum, average and minimum values of error.

Error (m):   Max = 0.282   Avg. = 0.107   Min = 0

To facilitate the use of the prototype in these conditions, a step motor can be attached to the prototype, rotating it when needed and thus saving time (figure 3.10). We started to build a platform with a step motor; however, this was not concluded, since a simulation environment was being developed in which this platform could be represented. Another possible problem was that the cardboard may not have been completely aligned; since we rotate the prototype over it, this can introduce an error in the azimuth angle. This error can be reduced by using strategies for better alignment, or by verifying whether it is a systematic error that increases with the distance to the axis. If that is the case, the error can be cancelled in the final calculation.

Figure 3.10: Structure with a step motor.

The last problem detected concerns the elevation angle. In most of the results only one PD receives signal, and thereby the estimated elevation angle is equal to the elevation of that PD (the average is computed with a single value). These results show that the field of view is not wide enough to obtain a correct elevation using the weighted average. After completing this experiment, we concluded that with the available prototype it was very difficult to change the number of PDs, their positions and their FOV.

3.2 Simulation Environment for VLP

Due to the problems detected in the previous section, a simulation environment for VLP was created to overcome the lack of flexibility of the prototype used before, by allowing us to change the number of sensors and their FOV. That way, it is possible to explore new structures and configurations and to verify the response with different sensors, among others, using a flexible simulator. The simulation environment comprises two main modules: modelling and positioning (Figure 3.11). Modelling is responsible for the mathematical representation of all system entities in 3D space and for the simulation of the sensor behaviour. The positioning module receives the output of the previous module, representing the sensors' response, and estimates the position based on the algorithm described in the previous section. Each module will be detailed in the next subsections.

This work was presented at the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) in 2016 [30].

Figure 3.11: The two main modules from the simulation environment.


3.2.1 Modelling

The sensor for VLP consists of a set of identical photo-detectors (PD) placed on a hemispherical dome following a regular pattern. The optical power Pi received by PD i is given by (3.7), where Poi is the optical power received when emitter and receiver are aligned and 1 meter apart, di is the distance from the light source to the PD, αi is the incidence angle with respect to the normal vector (axis) of the PD, S(.) is the directional sensitivity function, which accounts for the decrease in sensitivity with the increase in the incidence angle, and Π(.) is the rect function. The half field-of-view (HFOV), denoted by Ψ1/2, is defined as the maximum deviation from the main axis at which the source light is detected by the PD. The incidence angle αi and the HFOV Ψ1/2 are represented in Figure 3.12.

P_i = \frac{P_{oi}}{d_i^2} \, S(\alpha_i) \, \Pi\!\left(\frac{\alpha_i}{2\Psi_{1/2}}\right) \qquad (3.7)

The half field-of-view Ψ1/2 of the PDs can be adjusted as needed. This can be achieved, for instance, by placing the PD inside an opaque tube and adjusting its length. The field-of-view value will be a trade-off between a wide angle, for covering a significant area and avoiding blind spots, and a narrow angle, for providing spatial discrimination. For example, with only 20 PDs we cannot use a field-of-view of 10°, since it would result in many blind spots because the area of interest would not be fully covered.

Figure 3.12: Incidence angle and field-of-view.

The system modelling was based on Homogeneous Transformation Matrices (HTM). HTMs can be used as a mathematical tool to represent position and orientation, as well as translation and rotation movements [31]. It is possible to use HTMs to represent several successive movements by multiplying them. This makes the HTM a flexible tool for handling position and movement in robotics. In this work, this concept is extended to include the representation of optical components (emitter and receiver), allowing the computation of the behaviour of these components given their location and orientation in space. Each component has its own HTM to represent it in space. The functions used to create the entities are Rotx, Roty, Rotz, TR, CreateSensors, IntensitySens and LightIntV2, described in annex A, table A.1.


As an example, (3.8) is the HTM representing the translation movement from the originto the point (x, y, z) and (3.9) represents a rotation of θ around the z axis in the origin.

T = \begin{bmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.8)

R = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 & 0 \\ \sin(\theta) & \cos(\theta) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.9)

To obtain results using the simulation environment, a VLP sensor based on photo-diodes was used. These are regularly distributed on a hemispherical dome, with their orientation vectors normal to the dome surface. The distribution is based on meridians and parallels. Angle-wise, meridians and parallels are equally spaced. In the case of the parallels, the lowest parallel may lie on the equator or be placed on a more elevated line. There is no PD at the pole, so the two opposite PDs on the highest parallel are at the same angular distance as the other PDs on the same meridian. As such, the VLP sensor is defined by three parameters: Nm, the number of meridians, Np, the number of parallels, and φ(0), the offset of the lowest parallel line. Figure 3.13 depicts a VLP sensor with Nm = 8, Np = 3 and φ(0) = 0.

Figure 3.13: A VLP sensor with 8 meridians and 3 parallels.

Based on this distribution procedure, the position and orientation of the different PDsin relation to the center of the dome are easily defined by a distance, R, the radius of thedome, and two angles, θi and φi, the azimuth and the elevation angles, respectively. Forthe sake of simplicity, and without loss of generality, the reference frame of the dome isdefined such that the base is centred in the xOy plane with the Z axis going toward thepole.

The values of θj and φi are given by (3.10), where j = 0..Nm and i = 0..Np.

\left[\, \theta_j \;\; \varphi_i \,\right]^T = \left[\; \frac{2 j \pi}{N_m} \;\;\; \varphi(0) + \frac{i\,(\pi - 2\varphi(0))}{2 N_p - 1} \;\right]^T \qquad (3.10)


Assuming that a PD is initially at the origin of the reference frame and aligned with it (with its sensitivity diagram aligned with the zz axis), the orientation corresponds to a rotation θ around the zz axis, followed by a rotation ψ = π/2 − φ around the rotated yy axis. Finally, in order to put the PD on the surface of the dome, a translation R along the rotated Z axis is required. The complete transformation is given by the multiplication of three HTMs, as given in (3.11)

H_{i,j} = T_{i,j} \, R^{Y}_{i,j} \, R^{Z}_{i,j} \qquad (3.11)

where R^{Z}_{i,j}, R^{Y}_{i,j} and T_{i,j} are, respectively, given by (3.12), (3.13) and (3.14).

R^{Z}_{i,j} = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 & 0 \\ \sin(\theta) & \cos(\theta) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.12)

R^{Y}_{i,j} = \begin{bmatrix} \cos(\psi) & 0 & \sin(\psi) & 0 \\ 0 & 1 & 0 & 0 \\ -\sin(\psi) & 0 & \cos(\psi) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.13)

T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & R \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.14)

For simplicity, light sources were modelled as a single point and with an omnidirectionalradiation pattern. As such, the light source pose was fully determined by its position inspace. The only geometrical parameters required to compute the received light intensityare the distance from the light source to the receiver and the incidence angle at the receiver.

The sensitivity function is presented in (3.15), where α ∈ [0, π/2]. The function has amaximum of 1 at α = 0, which means that the sensor gets the maximum intensity whenthe light source is on the PD axis [32]. The intensity also decreases with the square of thedistance, as shown in (3.7).

S(\alpha) = \cos^{m}(\alpha) \qquad (3.15)

The parameter m is the Lambert’s mode number that represents the directivity of thelight source and it is related to Φ, the incidence angle at half-power, by (3.16) [32].

m = \frac{-\ln(2)}{\ln(\cos(\Phi))} \qquad (3.16)

The computation of the light intensity received by each sensor is based on the HTMof the sensor, the position of the light source and the sensor’s Ψ1/2 (field-of-view). Thisintensity will be used to estimate the robot position.
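A minimal sketch of this intensity computation is shown below. It is illustrative only, and not the dissertation's IntensitySens or LightIntV2 functions; the half-power angle, FOV and positions used in the example are assumptions.

```python
# Illustrative sketch (not the dissertation's code) of the received optical
# power model, eqs. 3.7, 3.15 and 3.16, for one PD pose (4x4 HTM) and a
# point light source.
import numpy as np

def lambert_mode(half_power_angle):
    """Lambert's mode number m from the half-power angle Phi (eq. 3.16)."""
    return -np.log(2) / np.log(np.cos(half_power_angle))

def received_power(pd_htm, light_pos, p0, half_fov, m):
    """Optical power at a PD (eq. 3.7): P0/d^2 * cos^m(alpha), zero outside the HFOV."""
    pd_pos = pd_htm[:3, 3]              # PD position in world coordinates
    pd_axis = pd_htm[:3, 2]             # PD optical axis (local z expressed in the world frame)
    v = np.asarray(light_pos, float) - pd_pos
    d = np.linalg.norm(v)
    alpha = np.arccos(np.clip(np.dot(pd_axis, v / d), -1.0, 1.0))  # incidence angle
    if alpha > half_fov:                # rect term: light source outside the field of view
        return 0.0
    return p0 / d**2 * np.cos(alpha) ** m                          # eqs. 3.7 and 3.15

m = lambert_mode(np.radians(60))        # hypothetical half-power angle of 60 degrees -> m = 1
# PD at the origin facing +z, light 2 m directly above: alpha = 0, P = p0 / 4
print(received_power(np.eye(4), [0.0, 0.0, 2.0], p0=1.0, half_fov=np.radians(30), m=m))
```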


3.2.2 Adapt the algorithm

The positioning algorithm considers the outputs from the PDs and estimates the sensor location based on the light intensity received by each PD, the light location and the orientation of the sensor (Figure 3.11). The algorithm assumes that the sensor is placed in the xOy plane (z = 0) and that the light source is located in a horizontal plane at a fixed height h (z = h). The algorithm is the same as in the previous section but, as the sensor has multiple meridians, it was necessary to adjust the discontinuity of the trigonometric circle. We moved that discontinuity to the meridian opposite to the one receiving the highest light intensity. This creates an image rotated θ degrees from the original position, where θ corresponds to the chosen meridian.

The readings of the PDs are stored in an array L of size Np × Nm, where a value Li,j = 0 indicates that the PD on the i-th parallel and j-th meridian is not detecting light (the light source is outside that PD's field-of-view). To compute the azimuth θ, the algorithm searches for the best line from which to compute the azimuth estimate. For this purpose, there are two possible options: the line with the largest sum of its elements or the line with the most non-zero elements. The azimuth estimate θ is given by (3.17), where θj corresponds to the azimuth angle in column j.

\theta = \frac{\sum_{j=1}^{N_m} L_{i_s,j} \cdot \theta_j}{\sum_{j=1}^{N_m} L_{i_s,j}} \qquad (3.17)

To compute the elevation, the algorithm scans the columns of the L array, searching for the column j_s where the sum of the elements is maximal. This yields an estimate of an elevation angle, denoted here φ', given by (3.18), where φi is the elevation angle corresponding to line i.

\varphi' = \frac{\sum_{i=1}^{N_p} L_{i,j_s} \cdot \varphi_i}{\sum_{i=1}^{N_p} L_{i,j_s}} \qquad (3.18)

The estimated φ' is the elevation angle as detected by the PDs located on the chosen meridian, which has an azimuth equal to θ_{j_s}. This is not necessarily the elevation of the light source, but rather corresponds to the projection of the light source position onto the vertical plane that contains the PDs used in the computation. Defining γ = θ_{j_s} − θ, the estimate of the elevation angle φ is now given by (3.19).

\varphi = \arctan\left[\tan(\varphi') \cdot \cos(\gamma)\right] \qquad (3.19)

The distance d on the xOy plane from the sensor to the light source is given by (3.20), the light source position estimate (xl, yl, zl) is given by (3.21), and the error was obtained using equation (3.6).

d = \frac{h}{\tan(\varphi)} \qquad (3.20)

(x_l, y_l, z_l) = (d \cdot \cos(\theta),\; d \cdot \sin(\theta),\; h) \qquad (3.21)
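The sketch below summarises this positioning step. It is illustrative only and not the dissertation's code; it uses the "most non-zero" criterion for the best line (reported as superior in section 3.2.4) and omits the handling of the trigonometric-circle discontinuity described above.

```python
# Illustrative sketch (not the dissertation's code) of the dome positioning
# algorithm, eqs. 3.17-3.21, applied to the Np x Nm array of PD readings L.
import numpy as np

def estimate_light_position(L, parallels, meridians, h):
    """L[i, j]: reading of the PD on parallel i / meridian j (0 means no light).
    parallels, meridians: elevation/azimuth angles (rad) of the grid; h: light height."""
    i_s = int(np.argmax((L > 0).sum(axis=1)))                  # line with most non-zero readings
    theta = np.sum(L[i_s] * meridians) / np.sum(L[i_s])        # eq. 3.17, azimuth
    j_s = int(np.argmax(L.sum(axis=0)))                        # column with the largest sum
    phi_m = np.sum(L[:, j_s] * parallels) / np.sum(L[:, j_s])  # eq. 3.18, elevation in meridian j_s
    gamma = meridians[j_s] - theta
    phi = np.arctan(np.tan(phi_m) * np.cos(gamma))             # eq. 3.19, corrected elevation
    d = h / np.tan(phi)                                        # eq. 3.20
    return d * np.cos(theta), d * np.sin(theta), h             # eq. 3.21
```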


To assess this validation framework for VLP, several tests were performed to validate the positioning and the modelling modules.

3.2.3 Test Procedure

The estimation of the sensor’s position is straightforward once the light source positionis known. The positioning algorithm considers that the sensor frame is placed at the originand thus the result will be the location of the light source relative to the sensor. To performthe tests the sensor was positioned at the origin and the light source was placed at a heightz = 5 and in different (x, y) locations, in a regular 2-dimensional grid from (−10,−10)to (10, 10). At each point, the sensor position was estimated using the algorithm aboveand the positioning error, defined as the euclidean distance from the actual position to theestimated position was computed. The error map uses the most non-zero elements andthe largest value for the sum of its elements, allowing the comparison between them. Toestimate the light source position (and, by inversion, the sensor position), the algorithmrelies on the estimation of the light source azimuth θ and elevation φ relative to the sensor.The final test was to compare the average and the maximum error associated to threedifferent configurations, using 8 meridians for all configurations and 4, 5 and 6 parallels,when the FOV is varying as we can see in the results. Considering that the number ofsensors required is given by Np times Nm, the higher the number of parallels and meridiansused the more sensors are needed.

3.2.4 Results analysis

The sensor was used to perform a set of experiments in order to validate both thesimple positioning algorithm described before and the viability of the platform to be usedas a simulation and validation framework.

Figure 3.14 presents the error map for a sensor with Nm = 8 and Np = 5, estimating the azimuth using the line with the largest sum of its elements. A clear improvement is noticed when the line with the most non-zero elements is used (Figure 3.15).

As shown in table 3.3, a ten-fold reduction in both average and maximum error isobserved when changing from selecting the line with the largest sum to the line with themost non-zero elements, when estimating the azimuth.

Table 3.3: Comparison of methods for choosing the best line for azimuth computation.

Error   Largest sum   Most non-zero
Max     1.89          0.19
Avg.    0.62          0.05

Figure 3.16 presents the results for the maximum and average value of the error, varyingΨ1/2, from 45◦ to 90◦, for three different sensor configurations: (Nm, Np) = (8, 4), (8, 5)and (8, 6). The best results are obtained for Ψ1/2 = 78◦, with an average error of 0.05 anda maximum error of 0.1941.


Figure 3.14: Error map, using largest sum (z_light = 5, N_merid = 8, N_paral = 5, FOV = 78°).

These results show the viability of implementing VLP using a simple sensor, based on a set of photo-diodes, and a simple localization algorithm, yielding a low-cost solution for visible light positioning. Real-world conditions, such as noise and other error sources in the sensor readings, different photo-detector characteristics, sensor configuration and photo-detector misalignment, were not yet considered in the simulation scenario. Therefore, the next section presents the validation of the simulation environment.

3.3 Simulation Environment Validation

Comparing the results from a real system with those from a simulated one makes it possible to understand what needs to be changed, or improved, in the simulator to obtain a response close to the real one. The simulation environment for VLP presented in the previous section is no exception. In this section two types of validation are presented: intensity validation and position validation. The intensity validation recreates the test performed in [17], moving the prototype in a straight line and registering the intensity received by each PD, this time using the simulator. The position validation compares the position estimates of Section 3.1 with the simulator results.

3.3.1 Test Procedure

To compare the intensities it is necessary to recreate the experiment done in [17]. The author placed the prototype under the light source (origin), facing −x, then moved it along −x until the sensor at 90° stopped receiving signal. Finally, he moved it along the same line towards the positive side. Using the information from this test we were able to calculate the FOV of each sensor in the prototype (Figure 3.18); in this case the height is 2.18 m.


Figure 3.15: Error map, using the most non-zero values (error plot for z_light = 5, N_merid = 8, N_paral = 5, FOV = 78°; x and y from −10 to 10, error scale 0 to 2).

For the FOV, we took the two zeros of each PD response function, x_1^n and x_2^n, and used them as the bases of right-angle triangles with a height of 2.18 m (Figure 3.17). Each PD thus has two angles (α_1^n and α_2^n) corresponding to the positions of the zeros:

α_i^n = tan⁻¹(x_i^n / h)    (3.22)

Equation 3.22 gives the angles α_i^n for each PD, where i indexes the angles per PD (two in our case): α_1^n is the angle corresponding to x_1^n and α_2^n the angle corresponding to x_2^n, for a height h (Figure 3.17). The actual elevation of each PD is obtained with Equation 3.23, and the FOV is obtained by subtracting α_1^n from α_2^n and dividing by two, as prescribed by Equation 3.24.

Elevation = 90° − (α_1^n + α_2^n) / 2    (3.23)

FOV = (α_2^n − α_1^n) / 2    (3.24)
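Translated into code, Equations 3.22 to 3.24 reduce to a few lines. The sketch below is only illustrative; it assumes the two zeros x_1^n and x_2^n of a PD response and the height h (2.18 m in this experiment) are known.

```python
import math

def pd_elevation_and_fov(x1, x2, h=2.18):
    """Elevation and FOV of a photo-diode from the zeros of its response.

    Implements Eqs. 3.22-3.24: the two zeros are the bases of right-angle
    triangles of height h, and the corresponding angles give the PD's real
    elevation and FOV.
    """
    alpha1 = math.degrees(math.atan(x1 / h))    # Eq. 3.22 for x_1^n
    alpha2 = math.degrees(math.atan(x2 / h))    # Eq. 3.22 for x_2^n
    elevation = 90.0 - (alpha1 + alpha2) / 2.0  # Eq. 3.23
    fov = (alpha2 - alpha1) / 2.0               # Eq. 3.24
    return elevation, fov

# Example: the first row of Table 3.4
# pd_elevation_and_fov(-0.5, 0.3) -> (approximately 92.5, 10.4)
```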

Table 3.4 lists x_1^n and x_2^n for each PD and the values obtained using Equations 3.22, 3.23 and 3.24 and Figure 3.18. With these values we can configure the simulator with a FOV close to the measured one.

Table 3.4: Values used to obtain the real FOV and elevation angle of the prototype.

n   Nominal elev. (°)   x_1^n    x_2^n   α_1^n (°)   α_2^n (°)   Real elevation (°)   FOV (°)
0   90                  -0.50    0.30    -12.9178     7.8355     92.541               10.37
1   79                  -0.05    0.70     -1.3138    17.8018     81.756                9.56
2   68.5                 0.40    1.15     10.3973    27.8127     70.895                8.71
3   58.5                 0.85    1.70     21.3012    37.9477     60.376                8.32
4   48                   1.25    2.30     29.8297    46.5343     51.818                8.35


Figure 3.16: Maximum and average error as a function of FOV (45° to 90°), for the (8,4), (8,5) and (8,6) configurations.

With the information from Table 3.4 we were able to configure the simulator and compare the received intensities by performing a similar experiment. The final part consisted of estimating the position at several test points (similar to the experiment described in Section 3.1) using a simulator configuration with characteristics similar to those of the prototype. For that, we started by saving the elevation and azimuth, to compare them with the physical values, and then the position was estimated. The number of sensors was 8x36: the prototype is composed of 8 layers of sensors and, in the experiment described before, it was rotated from 0° to 90° in ten steps. This gave a circumference with 36 meridians and 8 parallels. We set the light source at (0, 0, 2.75) m and the robot at the same sampling points as in the real experiment. We then obtained the elevation angle and the azimuth with a FOV of 10°. We also analysed the same angles with a lower FOV (of 5°) to test its importance and relevance.
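The 8x36 arrangement can be generated as a set of unit pointing vectors, one ring of meridians per parallel. The sketch below is only an illustration of the idea; it is not the CreateSensors function from Appendix A, and the example elevations are the nominal PD elevations listed in Appendix B.

```python
import numpy as np

def dome_directions(elevations_deg, n_meridians):
    """Unit pointing vectors for PDs on a hemispherical dome.

    For each elevation (measured from the horizontal plane) a ring of
    `n_meridians` equally spaced azimuths is generated.
    """
    azimuths = np.deg2rad(np.arange(n_meridians) * 360.0 / n_meridians)
    dirs = []
    for elev in np.deg2rad(np.asarray(elevations_deg, dtype=float)):
        for az in azimuths:
            dirs.append([np.cos(elev) * np.cos(az),
                         np.cos(elev) * np.sin(az),
                         np.sin(elev)])
    return np.asarray(dirs)

# 8 parallels (nominal prototype elevations) x 36 meridians
directions = dome_directions([18.5, 27.5, 37.5, 47.5, 58.0, 68.0, 79.0, 90.0], 36)
```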

The results from these experiments were combined with those from Section 3.1 and are represented in Figure 3.21.

3.3.2 Results analysis

The comparison between systems helped us to better understand failures of the system, the machine or the algorithm. It allows us to detect bugs, to verify whether the performance is consistent between tests and, when comparing real and simulated systems, to know whether the simulation is close to the real scenario. In this section we present the results of the different tests performed to validate the simulator.

Figure 3.17: Right angle triangle built with the information provided by the sensors.

The results of the intensity validation using the simulator are presented in Figure 3.19, with the corresponding prototype results in Figure 3.18 and an overlap of both (Figure 3.20) for easier comparison.

From the information in Table 3.5 it can be seen that the simulated values with FOV = 10° are the closest to the physical values. With that information we calculated the final positions (real and simulated) and plotted them together with the results obtained from the experiment with the prototype (Figure 3.21).

Comparing the intensity results, the simulator presents behaviour nearly identical to the real test (Figure 3.20), even though the results in Figure 3.19 show a systematic deviation of around 2°. Bearing in mind that the PDs in the prototype are not mounted at the nominal elevations (Table 3.4), the observed deviation should not be a problem of the simulator, but rather the result of the incorrect placement of the PDs in the structure. The deviation was calculated by obtaining the elevation angle corresponding to the position of each PD's maximum, both simulated and physical, and then subtracting them.

As mentioned before, the real FOV is between 8° and 10°. This difference can significantly affect the signal received by each PD, and because of that the sensors do not all contribute in the same way to the position estimation. It is important to note that at this point the simulation environment does not include noise. Nevertheless, the elevation angle results are very similar to those of the simulator when the FOV is 10° (Table 3.5). We noticed that some positions give better results, in the sense that, depending on the sensor and light positions, if a sensor was facing the light the error was close to zero. Including the FOV of 5° demonstrated the importance of this parameter for the position estimation: without a sufficient field of view it is not possible to obtain a good estimate. As mentioned before, this version of the simulation environment did not consider noise; even with this limitation the simulator presents satisfactory results.


Figure 3.18: Signal received by the Photo-detectors ([17]).

3.4 Noise Model Implementation

Implementing noise in a simulator makes the tests more realistic. It becomes possible to perform experiments close to real scenarios, helping to build future prototypes. In this way it is possible to verify the influence of noise on the algorithm used, the amount of noise the system can handle, how the noise affects the different sensor configurations and the error associated with it.

3.4.1 Noise in the PD

The white LED can be represented mathematically. For our purposes, its representation starts by defining its luminous flux Φv, given by (3.25), where Km = 683 lm/W is the luminous efficacy, a constant establishing the relationship between the radiometric and photometric units; it corresponds to the maximum amount of visible light that a light source can produce [33]. Vm is the photopic curve, VS represents the visible spectrum and S(λ) is the spectral power distribution (SPD) of the white LED [34].

Φv = Km ∫_VS S(λ) Vm(λ) dλ    (3.25)

A simple approach to model the spectral power distribution of white LEDs is to use Gaussian distributions centred on the device's maximum responses. Following this approach, the LED's SPD can be approximated by (3.26), where Si is the spectral power of the device at the peak wavelength λi, σi represents the power spreading around λi and wi is the weighting factor describing the additive proportion of each peak wavelength [34].

Figure 3.19: Signal received by the PDs in the simulator.

S(λ) = Σ_i wi Si exp(−((λ − λi) / (√2 σi))²)    (3.26)
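A numerical sketch of this sum-of-Gaussians SPD model (Eq. 3.26) is shown below. It is not the spd_LED function from Appendix A, and the peak wavelengths, widths and weights are illustrative placeholders for a phosphor-converted white LED, not values taken from this work.

```python
import numpy as np

def white_led_spd(lam, peaks, sigmas, s_peaks, weights):
    """Sum-of-Gaussians approximation of a white LED spectrum (Eq. 3.26)."""
    spd = np.zeros_like(lam, dtype=float)
    for lam_i, sig_i, s_i, w_i in zip(peaks, sigmas, s_peaks, weights):
        spd += w_i * s_i * np.exp(-((lam - lam_i) / (np.sqrt(2.0) * sig_i)) ** 2)
    return spd

lam = np.arange(380.0, 781.0, 1.0)              # visible spectrum, in nm
S = white_led_spd(lam, peaks=(450.0, 560.0),    # placeholder blue chip + phosphor peaks
                  sigmas=(20.0, 70.0),
                  s_peaks=(1.0, 0.7),
                  weights=(1.0, 1.8))
```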

Given the SPD, obtaining the current Id produced in the PD by the white LED requires the PD's responsivity R(λ) and its effective area Aef (characteristics given by the PD manufacturer). The current Id is then given by (3.27).

Id = Aef ∫ S(λ) R(λ) dλ    (3.27)

Finally, calculating the noise variance σ², given by (3.28), requires the electron charge q, the current Id and the bandwidth B.

σ² = 2 q Id B    (3.28)

Thus, the current I in a PD (Figure 3.22) is given by (3.29).

I ∼ N(Id, σ²)    (3.29)
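Combining Equations 3.27 to 3.29, the PD current and its shot-noise model can be sketched as below. This is an illustrative outline, not the pdnoise function listed in Appendix A: the responsivity curve and effective area would come from the PD datasheet and are left as generic arguments, and the numerical integration is a simple rectangle rule.

```python
import numpy as np

Q_E = 1.602e-19   # electron charge, in coulomb

def pd_mean_current(lam, spd, responsivity, area_eff):
    """Mean photo-current produced by the incident spectrum (Eq. 3.27)."""
    dlam = float(np.mean(np.diff(lam)))
    return area_eff * float(np.sum(spd * responsivity)) * dlam

def pd_noisy_current(i_d, bandwidth, rng=None):
    """One shot-noise corrupted sample of the PD current (Eqs. 3.28 and 3.29)."""
    rng = np.random.default_rng() if rng is None else rng
    variance = 2.0 * Q_E * i_d * bandwidth          # Eq. 3.28
    return rng.normal(loc=i_d, scale=np.sqrt(variance))
```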

This is how the noise in each PD can be represented, based on the amount of light that reaches it and on the datasheet information. However, we decided to introduce noise in the system using the information described above together with the Signal-to-Noise Ratio (SNR). This implementation allowed us to easily control the amount of noise in the system and its influence on the position estimation. As mentioned before, the light source was modelled as a single point with an omnidirectional radiation pattern, so the current in each PD is related to its illuminated area.

Figure 3.20: Overlap of the real and simulated results.

3.4.2 Implementation

Knowing that the PD's noise has a normal distribution, and using the knowledge from the previous section, a possible way to implement noise in the simulator is through the SNR. In this case, the SNR expression (3.30) can be obtained from (3.28) and (3.27). Using a normal distribution centred on Id with variance σ², it is easy to generate random intensities based on the SNR, Id and σ.

SNR = σ² / Id² = 2qB / Id    (3.30)

So, given the white LED's standard luminous flux (approximately 500 lux), it is possible to proceed to the next step: vary the SNR and observe the response of the simulator when estimating the position.

3.4.3 Test Procedure

In the tests, we decided to vary the SNR from 1 to 150 dB for three different structures, defined as Nparallels x Nmeridians (6x24, 8x32 and 10x40). These structures keep a 1:4 ratio: since a hemispherical dome is used (Figure 3.23), about four times more meridians than parallels are needed to cover it. The light position was set at (2.05, 1.99, 5).


Table 3.5: Comparison between the elevation and azimuth from the experiment and from the validation framework.

Position      Physical values        Simulated, FOV = 10°     Simulated, FOV = 5°
              Elev.     Azim.        Elev.     Azim.          Elev.     Azim.
(0;0)         90        -            90        -              90        -
(0;0.5)       79        0            79.46     0              79.46     0
(0;1.0)       68.35     0            69.04     0              74.11     0
(0.5;0)       79.1      90           79.46     90             79.46     90
(0.5;0.5)     79.07     48.93        78.41     45             74.06     45
(0.5;1.0)     68.35     28.83        68.66     30             63.52     30
(1.0;0)       68.1      89.42        69.04     90             74.11     90
(1.0;0.5)     68.35     68.84        68.66     60             63.52     60
(1.0;1.0)     62.01     46.26        62.73     45             63.44     45
(1.5;0)       59.9      90           58.81     90             63.52     90
(1.5;0.5)     57.79     74.23        57.91     73.89          63.52     70
(1.5;1.0)     57.79     59.88        57.97     60             52.94     60
(0.5;-0.5)    78.55     45.55        78.41     45             74.06     45
(0.5;-1.0)    68.35     36.4         68.66     30             63.52     30
(1.0;-0.5)    68        64.71        68.66     60             63.52     60
(1.5;-0.5)    57.79     74.32        57.91     73.89          63.52     70

Four different fields of view (40°, 30°, 20° and 10°) were established. We tried to choose a light position that avoided alignment with a sensor, i.e., one where the elevation angle between the robot and the light did not coincide with a PD position. If this alignment were not avoided, the associated error would be artificially reduced, giving a misleading result.

For each value of SNR we obtained 1000 different intensity values from the random normal distribution. With that information we estimated the same position several times. We then calculated the error using (3.6), where x and y were zero (the robot was placed at the origin) and x′ and y′ were the estimated position. Finally, we calculated the maximum, average and minimum error for each situation.
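The test loop described above could look like the sketch below. `clean_intensities` and `estimate_xy` are placeholders for the simulator output and the positioning algorithm, and the conversion from SNR in dB to a noise standard deviation assumes the conventional definition SNR_dB = 10 log10(Id²/σ²), which matches the trend of decreasing error with increasing SNR.

```python
import numpy as np

def snr_sweep(clean_intensities, estimate_xy, snr_db_values, n_samples=1000, rng=None):
    """Minimum, average and maximum positioning error versus SNR.

    The robot is assumed to be at the origin, so the error of one trial is
    simply the norm of the estimated (x, y).
    """
    rng = np.random.default_rng() if rng is None else rng
    clean = np.asarray(clean_intensities, dtype=float)
    stats = []
    for snr_db in snr_db_values:
        sigma = clean / (10.0 ** (snr_db / 20.0))   # per-PD noise std. dev.
        errors = np.empty(n_samples)
        for k in range(n_samples):
            noisy = rng.normal(loc=clean, scale=sigma)
            x_est, y_est = estimate_xy(noisy)
            errors[k] = np.hypot(x_est, y_est)
        stats.append((errors.min(), errors.mean(), errors.max()))
    return np.asarray(stats)
```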

3.4.4 Results analysis

In this section we present the results of the experiments described previously. The results were used to compare different configurations and their response to noise.

Figures 3.24, 3.25 and 3.26 show the error for the three different configurations (6x24, 8x32 and 10x40), where the dashed lines represent the maximum and minimum error and the continuous line the average error. It is interesting to notice that Figure 3.25 presents a smaller error than the others for all FOVs. This result indicates that some configurations are more appropriate than others. We then fixed the FOV value and compared the response of each structure to that field of view.

It is clear that the 8x32 configuration performs better than the other configurations. The smallest error is obtained when the FOV is 40° (Figure 3.30) compared with the others (Figures 3.27, 3.28 and 3.29), indicating that the error decreases when more PDs contribute information. The SNR evaluation showed the range over which noise influences the position estimation (between 10 dB and 60 dB); above that range the remaining error is due to the algorithm itself.

Figure 3.21: Map of points with the results from the real and simulated scenarios.

Table 3.6: Average error of all configurations with the different FOVs.

FOV (°)   6x24     8x32     10x40
40        0.0534   0.0056   0.0403
30        0.1148   0.0084   0.1299
20        0.7801   0.0445   0.1803
10        0.7801   0.0489   0.4052

3.5 Chapter Remarks

Using the prototype developed in [17] we were able to estimate its position using a simple algorithm and simple trigonometry. The prototype presented several limitations, such as the number of PDs, their FOV and its lack of flexibility. This motivated the creation of a simulation environment for VLP to overcome these limitations. The simulator allowed us to test different configurations with the same structure, varying the FOV, the number of sensors and their elevation angles. To demonstrate the viability of the simulation environment, we compared its results with those of the first experiment and with the results provided by [17]. Even though some differences were detected, the estimated positions were very close to each other. Finally, a noise model was implemented in the simulator. We only considered the noise in the PDs and, since it follows a normal distribution, it could be implemented from the PD current, its variance and the SNR. If a different sensor is used, it is necessary to verify whether this implementation still applies. At this point it is possible to appreciate the importance of the FOV and of the number of sensors for estimating the position with this approach, and the need to verify whether a better configuration exists for this scenario.

Figure 3.22: Photo-detector current.

Figure 3.23: Hemispherical dome with meridians and parallels represented.


Figure 3.24: Error of the 6x24 structure for different FOVs as the SNR varies.

Figure 3.25: Error of the 8x32 structure for different FOVs as the SNR varies.


Figure 3.26: Error of the 10x40 structure for different FOVs as the SNR varies.

Figure 3.27: The three different configurations with a FOV of 10°.


Figure 3.28: The three different configurations with a FOV of 20°.

Figure 3.29: The three different configurations with a FOV of 30°.


Figure 3.30: The three different configurations with a FOV of 40°.


Chapter 4

Conclusion and Future Work

4.1 Conclusion

The initial aim of this work was to estimate the position of a robotic platform using Visible Light Communication. We verified that it was possible to estimate the position using the prototype, even with its main limitation: the lack of flexibility, meaning that the number of sensors and their FOV could not be changed. Even with these constraints we achieved an average error of 10 cm. The results show that this structure can be used for VLP.

To overcome the prototype's limitations, a simulation environment was created. In it, we used homogeneous transformation matrices (HTM) to represent the environment entities, together with mathematical representations of the sensor (photo-diode) and of the light bulb (LED). This allowed the creation of a flexible environment where the number of sensors, their elevation angles and their FOV can easily be changed as needed. The results show the viability of implementing VLP using a simple sensor based on a set of photo-diodes, yielding a low-cost solution for visible light positioning.

To validate the simulation, two tests were performed. The first re-created an experiment done in [17], obtaining a sensor response over a line similar to the real one. The second estimated the robot's location as in Section 3.1. The estimated positions were close to those of the real experiment, even without noise.

A noise model was implemented based on the PD current, its noise variance and the Signal-to-Noise Ratio (SNR). The results show an error of a few centimeters at an SNR of 30 dB, reinforcing the viability of implementing VLP with a simple sensor based on a set of photo-diodes distributed over a hemispherical dome, and showing how the error changes with the amount of noise. After testing the error of three different configurations (6x24, 8x32 and 10x40) subject to noise, we noticed that 8x32 presents a smaller error than the others. We also analysed how the field of view influences these configurations by setting it to 10°, 20°, 30° and 40° in each one. We concluded that the photo-diodes' field of view (FOV) matters when the position is estimated: the sensors' FOV should be large enough for neighbouring fields to overlap, preventing blind spots, but not so large that all sensors receive signal, which would lead to errors. We also concluded that, among the studied configurations, the 40° FOV presents the lowest error. This can be an interesting subject for future work.

4.2 Future Work

In view of the results, we present some ideas that can be developed in future work. Building a simple structure where the FOV and the number of PDs can easily be changed, or one with a single PD and two stepper motors giving the PD the freedom to search for the light, would facilitate the experiments. It would also be important to focus on the different configurations and on why one presents better results, or to configure new sensors in the simulator and verify which one is most appropriate.


Bibliography

[1] Jean Armstrong, Y. Sekercioglu Ahmet, and Adrian Neild. Visible light positioning: A roadmap for international standardization. IEEE Communications Magazine, 51(12):68–73, 2013.

[2] Ziyan Jia. A Visible Light Communication Based Hybrid Positioning Method for Wireless. IEEE Computer Society, 7(12):1367–1370, 2012.

[3] Se-hoon Yang and Sang-kook Han. VLC Based Indoor Positioning using Single-Tx and Rotatable Single-Rx. 2013.

[4] Penghua Lou, Hongming Zhang, Xie Zhang, Minyu Yao, and Zhengyuan Xu. Fundamental analysis for indoor visible light positioning system. 2012 1st IEEE International Conference on Communications in China Workshops, ICCC 2012, pages 59–63, 2012.

[5] Wang Chunyue, Wang Lang, Chi Xuefen, Liu Shuangxing, Shi Wenxiao, and Deng Jing. The Research of Indoor Positioning Based on Visible Light Communication. (August):85–92, 2015.

[6] Nuno Lourenco. Communication systems using visible light: Emitter/receiver. Master's thesis, University of Aveiro, 2009.

[7] Madoka Nakajima and Shinichiro Haruyama. New indoor navigation system for visually impaired people using visible light communication. EURASIP Journal on Wireless Communications and Networking, 2013(1):37, 2013.

[8] Weizhi Zhang, M. I. Sakib Chowdhury, and Mohsen Kavehrad. Asynchronous indoor positioning system based on visible light communications. Optical Engineering, 53(4):045105, 2014.

[9] Mike Dempsey. Optical indoor positioning systems. Biomedical Instrumentation Technology, 20(3):195–200, 2004.

[10] Jose N Vieira, Sergio I Lopes, Carlos AC Bastos, and Pedro N Fonseca. Ultrasound sensor array for robust location. Multi-Agent Robotic Systems, Proceedings, pages 84–93, 2007.

[11] Gil Lopes, Andreia Albernaz, Helder Ribeiro, Fernando Ribeiro, and MS Martins. Tracking sound source localization for a home robot application. 2016.

[12] A. Lindo, E. Garcia, J. Urena, M. del Carmen Perez, and A. Hernandez. Multiband waveform design for an ultrasonic indoor positioning system. IEEE Sensors Journal, 15(12):7190–7199, Dec 2015.

[13] Lubin Zeng, Dominic C O'Brien, Hoa Minh, Grahame E Faulkner, Kyungwoo Lee, Daekwang Jung, YunJe Oh, and Eun Tae Won. High data rate multiple input multiple output (MIMO) optical wireless communications using white LED lighting. Selected Areas in Communications, IEEE Journal on, 27(9):1654–1662, 2009.

[14] Liang Yin, Xiping Wu, and Harald Haas. Indoor visible light positioning with angle diversity transmitter. In Vehicular Technology Conference (VTC Fall), 2015 IEEE 82nd, pages 1–5. IEEE, 2015.

[15] Hui Liu, Houshang Darabi, Pat Banerjee, and Jing Liu. Survey of wireless indoor positioning techniques and systems. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 37(6):1067–1080, 2007.

[16] Trong-Hop Do and Myungsik Yoo. An in-depth survey of visible light communication based positioning systems. Sensors, 16(5):678, 2016.

[17] Filipe Duarte. Visible light indoor positioning for mobile robots. Master's thesis, University of Aveiro, 2015.

[18] OSI Optoelectronics. Photodiode Characteristics and Applications. http://www.osioptoelectronics.com/application-notes/AN-Photodiode-Parameters-and-Characteristics.pdf, 2016. Accessed 07-11-2016.

[19] Steven De Lausnay, Lieven De Strycker, Jean Pierre Goemaere, Nobby Stevens, and Bart Nauwelaers. Optical CDMA codes for an indoor localization system using VLC. 2014 3rd International Workshop in Optical Wireless Communications, IWOW 2014, pages 50–54, 2014.

[20] Liqun Li, Pan Hu, Chunyi Peng, Guobin Shen, and Feng Zhao. Epsilon: A Visible Light Based Positioning System. 11th USENIX Symposium on Network Systems Design and Implementation, (1):1–13, 2014.

[21] Steven De Lausnay, Lieven De Strycker, Jean-Pierre Goemaere, Nobby Stevens, and Bart Nauwelaers. A visible light positioning system using frequency division multiple access with square waves. In Signal Processing and Communication Systems (ICSPCS), 2015 9th International Conference on, pages 1–7. IEEE, 2015.

[22] Marco Forzati. Phase modulation techniques for on-off keying transmission. In 2007 9th International Conference on Transparent Optical Networks, volume 1, pages 24–29. IEEE, 2007.

[23] Jon Hamkins. Pulse position modulation. Handbook of Computer Networks: Key Concepts, Data Transmission, and Digital and Optical Networks, Volume 1, pages 492–508, 2007.

[24] Hyun-Seung Kim, Deok-Rae Kim, Se-Hoon Yang, Yong-Hwan Son, and Sang-Kook Han. An indoor visible light communication positioning system using an RF carrier allocation technique. Lightwave Technology, Journal of, 31(1):134–144, 2013.

[25] Kerim Fouli and Martin Maier. OCDMA and optical coding: Principles, applications, and challenges [Topics in Optical Communications]. IEEE Communications Magazine, 8(45):27–34, 2007.

[26] Robert M Gagliardi. Frequency-division multiple access. In Satellite Communications, pages 215–250. Springer, 1991.

[27] Se-Hoon Yang, Hyun-Seung Kim, Yong-Hwan Son, and Sang-Kook Han. Three-dimensional visible light indoor localization using AOA and RSS with multiple optical receivers. Journal of Lightwave Technology, 32(14):2480–2485, 2014.

[28] Divya Ganti, Weizhi Zhang, and Mohsen Kavehrad. VLC-based Indoor Positioning System with Tracking Capability Using Kalman and Particle Filters. 2014 IEEE International Conference on Consumer Electronics (ICCE), pages 476–477, 2014.

[29] Ye-sheng Kuo, Pat Pannuto, Ko-jen Hsiao, and Prabal Dutta. Luxapose: Indoor Positioning with Mobile Phones and Visible Light. Mobicom '14, pages 299–301, 2014.

[30] Miguel Vieira, Rui Costa, Artur Pereira, and Pedro Fonseca. A Validation Framework for Visible Light Positioning in Mobile Robotics. IEEE International Conference on Autonomous Robot Systems and Competitions, 2016.

[31] Richard P Paul. Robot Manipulators: Mathematics, Programming, and Control. MIT Press, 1981.

[32] Zabih Ghassemlooy, Wasiu Popoola, and Sujan Rajbhandari. Optical Wireless Communications: System and Channel Modelling with MATLAB. CRC Press, 2012.

[33] Erik Reinhard. Color Imaging: Fundamentals and Applications. A K Peters, Ltd., 2008.

[34] Manuel Francisco Monteiro Arderius de Faria. Comunicações óticas transdérmicas. Master's thesis, Instituto Superior Técnico, Universidade de Lisboa, 2015.


Appendix A

Functions and Scripts

A.1 Tables of functions and scripts used

Table A.1: Functions used for all tests performed.

Function         Description                                                                              Chapter
CreateSensors    Creates the sensors in a hemispherical dome placed on the robot, described by a         3.2 and 3.4
                 homogeneous matrix.
DeterPosV4       Algorithm used to estimate the robot's position based on the intensity received, the    3.2 and 3.4
                 light position and its orientation. It is based on the largest sum of its elements.
DeterPosV5       Algorithm used to estimate the robot's position based on the intensity received, the    3.2 and 3.4
                 light position and its orientation. It is based on the most non-zero values.
DeterPosV51      Similar to DeterPosV5; the difference is that version 51 accepts a matrix of            3.2 and 3.4
                 intensities or a matrix of light positions.
IntensitySens    Light intensities detected in the created sensors.                                      3.2 and 3.4
IntensitySensV   Similar to IntensitySens; this function accepts a vector of light positions or          3.2 and 3.4
                 n samples.
LightIntV2       Light intensity in a photo-detector based on its FOV and the angle with the light       3.2 and 3.4
                 bulb.
LightMatrix      Generates a matrix of light bulbs with different, equally spread positions.             3.2 and 3.4
Rotx             Rotation about the X axis using a homogeneous matrix.                                   3.2 and 3.4
Roty             Rotation about the Y axis using a homogeneous matrix.                                   3.2 and 3.4
Rotz             Rotation about the Z axis using a homogeneous matrix.                                   3.2 and 3.4
TR               3D translation using a homogeneous matrix.                                              3.2 and 3.4
mediaPonderada   Calculates a weighted average based only on relevant information.                       3.1
luxflux          Used to obtain the luminous flux of a LED.                                              3.4
pdnoise          Calculates the noise in the PD.                                                         3.4
photopic         Mathematical representation of the photopic curve.                                      3.4
spd_LED          Spectral power distribution, configured for a white LED.                                3.4


Table A.2: Scripts used for all tests performed.

Script                         Description                                                               Chapter
ErrorScript                    Calculates the error between the estimated positions and the real ones.   3.1
coorFinal_Quad1_Pos15_00       This script is an example; all of them follow the form                    3.1
                               coorFinal_Quad#_Azi##_Pos##_##. With the information received by the
                               photo-detectors it calculates the azimuth and elevation angles and
                               estimates the position.
proceValoresPic                Receives a matrix from the PIC32 and processes the values, giving the     3.1
                               intensity received by each photo-detector of the prototype.
ManualTest                     Allows testing different sensor configurations by changing some           3.2
                               arguments of the functions used, and calculates the position.
comparacaoValores              Plots the points from different experiments, helping to understand and    3.3
                               compare them.
MapTest_06_2016                Estimates the positions with different amounts of noise and plots them.   3.4
SNRxErr_FinalTests             Studies the behaviour of the algorithm using different amounts of noise   3.4
                               based on the SNR.
SNRxErr_FinalTests_PlotPart    Plots the results from SNRxErr_FinalTests.                                3.4
SNRxError                      Similar to the previous ones.                                             3.4
WhiteLED                       Simulates the behaviour of a white LED.                                   3.4


Appendix B

Data

B.1 Data received from the first and fourth quadrants


Table B.1: Signal received in each PD in the first quadrant. Each row gives, for a position and an azimuth (°), the signal amplitude in the PDs at elevations 90°, 79°, 68°, 58°, 47.5°, 37.5°, 27.5° and 18.5°.

(0;0) 0 38.5 2.0 2.0 2.0 2.5 2.0 2.5 2.010 38.5 2.0 2.0 2.5 2.5 2.0 2.5 2.020 38.5 2.0 2.0 2.0 2.5 2.0 2.5 2.530 38.5 2.0 2.5 2.0 2.5 2.0 2.5 2.040 38.5 2.0 2.0 2.0 2.5 2.0 2.5 2.045 38.5 2.0 2.0 2.0 2.5 2.0 2.5 2.050 43.0 2.5 2.5 4.0 2.0 2.0 2.0 2.560 43.0 2.5 2.5 4.0 2.0 2.0 2.0 2.070 43.0 2.5 2.5 4.0 2.0 2.0 2.0 2.080 42.5 2.5 4.0 4.0 2.0 2.5 2.0 2.090 43.0 2.5 2.5 4.0 2.0 2.0 2.0 2.0

(0;0.5) 0 6.5 44.5 4.0 4.0 4.0 4.0 4.0 2.510 6.0 44.0 4.0 4.0 3.5 4.0 4.0 4.020 6.5 47.5 4.0 4.0 4.0 4.0 4.0 4.030 6.5 46.0 4.0 4.0 4.0 4.0 4.5 4.040 5.0 25.5 4.0 4.0 4.0 4.0 4.0 4.045 5.0 10.0 4.0 4.0 4.0 4.0 2.0 4.050 5.0 6.0 4.0 4.0 4.0 4.0 2.0 4.060 4.5 5.0 4.0 3.5 4.0 4.0 2.5 4.070 3.0 4.0 4.0 3.5 4.0 4.0 4.0 4.080 3.0 3.5 4.0 3.0 3.0 4.0 3.0 4.090 3.5 3.0 4.0 4.0 3.0 3.0 4.5 3.0

(0;1.0) 0 4.5 4.0 42.5 4.0 4.0 4.0 4.0 4.010 5.0 4.0 39.0 4.0 3.5 4.0 2.5 3.520 6.0 4.5 14.5 4.5 4.5 3.5 3.5 3.530 5.0 3.5 4.0 4.0 3.0 3.0 4.0 3.040 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.045 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.050 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.060 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.070 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.080 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.090 5.0 4.0 4.0 4.0 3.0 4.0 3.0 4.0

(0.5;0) 0 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.010 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.020 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.030 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.040 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.045 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.050 3.5 4.0 2.0 4.0 2.5 3.0 2.0 2.060 3.5 16.0 2.0 4.0 2.0 2.0 2.0 2.070 3.0 26.5 2.5 4.0 2.0 2.5 2.5 2.080 3.0 32.5 3.0 4.0 2.0 2.0 2.0 2.090 5.0 29.0 2.0 2.0 2.5 2.0 2.5 2.0


Table B.2: Signal received in each PD in the first quadrant (continued). Each row gives, for a position and an azimuth (°), the signal amplitude in the PDs at elevations 90°, 79°, 68°, 58°, 47.5°, 37.5°, 27.5° and 18.5°.

(0.5;0.5) 0 3.5 4.0 3.0 4.0 3.5 3.5 3.0 2.010 3.5 3.5 3.0 4.0 3.5 3.5 3.0 2.020 3.5 8.0 3.0 4.0 3.5 3.5 3.0 2.030 3.5 8.0 3.0 4.0 3.5 3.5 3.0 2.040 4.0 28.5 4.0 3.5 2.5 3.0 2.0 1.545 3.0 31.0 5.0 3.0 2.5 3.0 2.5 2.550 3.5 31.5 6.0 3.0 3.0 3.5 2.5 2.560 3.0 31.0 4.5 3.5 3.0 3.5 2.0 2.570 4.0 14.5 3.0 3.5 2.5 2.5 2.5 2.580 3.5 4.0 2.5 2.5 3.0 3.0 2.5 2.590 3.5 4.0 2.5 3.0 2.5 3.0 2.5 2.5

(0.5;1.0) 0 4.0 3.0 5.0 3.5 2.5 3.5 3.0 2.510 4.0 3.0 4.0 3.0 2.5 3.5 3.0 2.520 4.0 3.5 35.5 4.0 3.5 3.5 2.5 2.530 4.0 3.0 39.0 3.0 3.5 3.5 2.5 3.540 4.0 3.5 24.0 2.5 3.5 2.5 3.5 3.545 3.5 2.5 11.5 3.0 2.5 3.0 3.5 2.550 3.5 2.5 2.0 3.0 2.5 4.0 2.5 2.560 3.5 2.5 2.0 3.0 2.5 4.0 2.5 2.570 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.080 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.090 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.0

(1.0;0) 0 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.010 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.020 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.030 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.040 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.045 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.050 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.060 4.0 3.5 2.5 3.0 2.5 4.0 2.5 3.070 4.0 4.0 6.5 4.5 4.0 4.0 4.0 4.080 4.0 4.0 36.0 4.0 4.0 4.0 4.0 3.090 4.0 4.5 38.0 4.0 3.5 3.5 3.5 4.0

(1.0;0.5) 0 3.5 2.0 3.0 3.5 3.5 3.0 3.0 3.010 3.5 2.0 3.0 3.5 3.5 3.0 3.0 3.020 4.0 3.5 3.5 3.5 2.5 3.5 3.5 2.530 4.0 3.5 3.5 3.5 2.5 3.5 3.5 2.540 4.0 3.0 3.0 3.5 2.5 2.5 2.5 2.545 3.5 2.5 3.5 3.5 2.5 3.0 3.0 2.050 3.0 2.5 5.5 3.5 2.5 2.5 2.5 1.560 3.5 3.5 32.5 3.5 3.0 3.5 2.5 2.570 3.0 3.5 36.5 4.0 3.0 2.5 2.5 2.580 3.0 3.5 22.0 3.5 2.5 2.5 2.5 2.590 2.5 3.0 3.5 3.5 3.5 2.5 3.5 1.5


Table B.3: Signal received in each PD in the first quadrant (continued). Each row gives, for a position and an azimuth (°), the signal amplitude in the PDs at elevations 90°, 79°, 68°, 58°, 47.5°, 37.5°, 27.5° and 18.5°.

(1.0;1.0) 0 3.5 2.5 3.0 3.5 2.5 3.0 2.5 2.010 3.5 2.0 3.0 3.5 3.5 3.0 3.0 3.020 3.5 2.0 3.0 3.5 3.5 3.0 3.0 3.030 3.0 3.5 3.5 3.5 2.5 2.5 2.0 2.540 3.5 2.5 5.5 14.5 3.0 2.5 2.5 2.545 4.0 3.5 15.0 26.0 2.5 2.5 3.5 2.550 4.0 2.5 22.0 33.0 2.5 3.5 2.5 2.560 3.5 3.0 3.0 8.0 2.5 2.5 2.5 2.570 4.0 3.5 3.5 3.5 2.5 3.5 2.5 3.080 4.0 3.5 3.5 3.5 2.5 3.5 2.5 3.090 4.0 3.5 3.5 3.5 2.5 3.5 2.5 3.0

(1.5;0) 0 5.0 4.0 4.0 4.0 4.5 4.0 4.0 4.010 5.0 4.0 4.0 2.5 4.0 4.0 4.0 4.020 5.0 4.0 3.0 4.0 4.0 4.0 4.0 4.030 5.0 4.0 4.0 4.0 4.0 4.0 4.0 4.040 5.0 4.0 2.0 4.0 4.0 4.0 2.0 4.045 5.0 4.0 4.0 4.0 4.0 4.0 4.0 4.050 5.0 4.0 4.0 4.0 4.0 4.0 4.0 4.060 5.0 4.0 4.0 4.0 4.0 4.0 4.0 4.070 5.0 4.0 4.0 4.0 4.0 4.0 4.0 4.080 4.0 3.0 4.5 10.5 4.0 4.0 3.5 3.590 4.5 3.5 8.0 32.0 4.0 4.0 3.5 4.0

(1.5;0.5) 0 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.510 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.520 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.530 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.540 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.545 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.550 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.560 2.0 4.0 4.0 5.0 4.0 3.5 3.0 2.570 4.0 3.5 4.0 28.0 4.0 4.0 4.0 2.080 2.0 4.0 4.0 20.5 4.0 4.0 4.0 2.090 3.0 3.0 3.0 3.0 3.0 3.0 2.5 3.0

(1.5;1.0) 0 3.0 4.0 3.5 3.0 4.0 3.0 3.0 4.010 3.0 4.0 3.5 3.0 4.0 3.0 3.0 4.020 3.0 4.0 3.5 3.0 4.0 3.0 3.0 4.030 3.0 4.0 3.5 3.0 4.0 3.0 3.0 4.040 4.5 3.0 4.0 4.0 3.0 2.5 4.0 2.545 4.5 3.0 4.0 4.0 3.0 2.5 4.0 2.550 4.5 3.5 4.0 10.0 4.0 4.5 4.0 4.060 4.0 4.0 4.0 25.0 4.0 4.0 4.0 4.070 4.0 3.5 4.0 9.5 4.0 4.0 3.5 3.580 4.0 4.0 3.5 4.0 4.0 4.0 3.5 4.090 4.0 4.0 3.5 4.0 4.0 4.0 3.5 4.0


Table B.4: Signal received in each PD in the fourth quadrant. Each row gives, for a position and an azimuth (°), the signal amplitude in the PDs at elevations 90°, 79°, 68°, 58°, 47.5°, 37.5°, 27.5° and 18.5°.

(0.5;-0.5) 0 3.5 2.5 3.5 3.5 4.0 4.0 3.0 2.510 4.0 4.0 3.5 3.5 2.5 3.5 3.0 3.520 4.0 4.0 3.5 3.5 2.5 3.5 2.5 4.030 5.0 24.5 3.5 3.5 2.5 3.5 2.5 2.040 4.0 39.0 8.0 4.0 2.5 3.0 3.5 3.545 5.5 39.5 8.5 3.5 3.5 3.5 2.5 3.550 4.5 38.5 10.0 3.5 2.5 3.0 3.0 3.060 4.0 31.0 4.0 3.5 2.5 3.5 3.0 2.570 4.0 10.5 3.5 3.5 2.5 3.5 3.0 2.080 4.5 3.5 3.5 4.0 3.0 3.5 3.5 4.090 4.0 3.5 3.5 4.0 3.5 3.0 3.0 3.5

(0.5;-1.0) 0 4.5 3.5 8.5 4.0 3.0 3.0 3.0 4.010 4.5 3.5 8.5 4.0 3.0 3.0 3.0 4.020 5.0 4.0 8.0 3.5 3.5 3.5 4.0 3.030 5.0 4.0 44.0 4.0 4.0 3.5 4.0 3.040 4.0 4.0 39.0 4.0 2.5 3.5 3.5 4.045 4.0 3.5 16.5 4.0 3.0 4.0 3.0 3.050 4.0 4.0 4.0 4.0 3.5 2.5 3.5 3.060 4.0 4.0 4.0 4.0 3.5 2.5 3.5 3.070 4.0 4.0 4.0 4.0 3.5 2.5 3.5 3.080 4.0 4.0 4.0 4.0 3.5 2.5 3.5 3.090 4.0 4.0 4.0 4.0 3.5 2.5 3.5 3.0

(1.0;-0.5) 0 4.5 4.0 4.5 4.0 3.0 3.0 3.5 3.510 4.5 4.0 4.5 4.0 3.0 3.0 3.5 3.520 4.5 4.0 4.5 4.0 3.0 3.0 3.5 3.530 4.5 4.0 3.5 4.0 4.0 3.5 4.0 2.540 4.5 4.0 3.5 4.0 4.0 3.5 4.0 2.545 4.5 4.0 6.0 4.0 4.0 3.5 4.0 2.550 4.0 3.5 26.5 4.0 4.0 3.5 3.5 4.060 4.5 4.0 43.5 5.0 4.0 4.0 3.5 3.570 4.5 4.0 43.0 5.0 3.5 2.5 3.5 3.580 4.5 3.5 24.0 3.5 3.0 3.5 4.0 3.590 4.5 5.0 4.5 4.5 4.5 4.5 3.5 4.5

(1.5;-0.5) 0 4.0 3.5 4.0 4.0 3.0 3.0 3.0 3.010 4.0 3.5 4.0 4.0 3.0 3.0 3.0 3.020 4.0 3.5 4.0 4.0 3.0 3.0 3.0 3.030 4.0 3.5 4.0 4.0 3.0 3.0 3.0 3.040 4.0 4.0 4.0 4.0 3.0 3.0 2.5 4.045 4.0 4.0 4.0 4.0 3.0 3.0 2.5 4.050 4.0 4.0 4.0 4.0 3.0 3.0 2.5 4.060 4.0 3.0 3.5 5.5 3.5 3.5 3.5 3.570 5.5 4.0 4.5 37.5 3.5 3.5 3.5 4.080 4.0 4.0 4.0 28.5 3.0 4.0 4.0 2.590 4.0 3.5 2.5 4.0 3.5 4.0 3.5 3.5
