### 1. Introduction

### 2. Challenges of WVSN Deployment

*v_1*'s FoV. This kind of situation has been considered in the proposed FoV selection algorithm (see Section 4) to find the best orientation of deployed video sensors while avoiding occluded regions, as shown in Fig. 1(b). In this figure, region 3 is covered by video node *v_2*'s FoV, which is not occluded by an obstacle.

### 3. Coverage in WVSNs

### 3.1 Related Work

- Known-targets coverage: video nodes try to monitor a set of targets (discrete points).
- Barrier coverage: the aim is to achieve a static arrangement of nodes that minimizes the probability of undetected penetration through the barrier.
- Area coverage: the aim is to find a set of video nodes that ensures the coverage of the entire area.

### 3.2 Video Sensor Node Model

A video sensor node *v* is defined as a sector denoted by a 5-tuple *v*(*P*, *R_s*, *V⃗*, *α*, *d*), where:

- *P* refers to the position of video sensor node *v* (located at point *P*(*p.x*, *p.y*)),
- *R_s* is its sensing range,
- *V⃗* refers to the vector representing the line of sight of the camera's FoV, which determines the sensing direction,
- *α* is the offset angle of the FoV on both sides of *V⃗* (an angle of view [AoV] is represented by 2*α*),
- *d* refers to the depth of view of the camera.
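The 5-tuple above can be captured as a small data structure. The following is a minimal Python sketch (field and class names are illustrative, not from the paper):

```python
import math
from dataclasses import dataclass

@dataclass
class VideoSensor:
    """5-tuple v(P, Rs, V, alpha, d) describing a video sensor's FoV sector."""
    px: float     # P: position, x coordinate
    py: float     # P: position, y coordinate
    rs: float     # Rs: sensing range
    theta: float  # direction of the line-of-sight vector V (radians)
    alpha: float  # offset angle on each side of V (AoV = 2 * alpha)
    d: float      # depth of view

# Example: a sensor at the origin looking along the x-axis with a 60-degree AoV
v = VideoSensor(px=0.0, py=0.0, rs=30.0, theta=0.0, alpha=math.pi / 6, d=25.0)
```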

### 4. Rotational Video Sensor Node Model with Obstacle Avoidance

- All video sensors have the same sensing range (*R_s*), communication range (*R_c*), and offset angle *α*.
- Video sensors within the *R_c* of a sensor are called the sensor's neighboring nodes.
- The sensing direction of each video sensor is rotational.
- Each sensor knows its own location and determines the location of its neighboring sensors by using wireless communications.
- The target region is a two-dimensional plane with the presence of obstacles.

*P* is the position of the video sensor, which can switch among three possible directions *V⃗_1*, *V⃗_2*, and *V⃗_3*. *V⃗_1* is the direction that the sensor faces when it is deployed, and the shadowed sector above *V⃗_1* is the sensing region of the video sensor when it works in direction *V⃗_1*. The FoVs corresponding to these directions are calculated as described below.

*t_1*, where:

Since we know the position (*p_x*, *p_y*) of the sensor node and we have already chosen a depth of view *d*, we can then calculate the coordinates of the three following points (illustrated in Fig. 2) in order to obtain the desired FoV:

From FoV_1 defined in Eq. (1), the video sensor can calculate the second and the third direction with their respective FoVs using the following equations, respectively:

These yield the second FoV_2 and the third FoV_3 of the video sensor node.
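The referenced equations are not reproduced above, but the geometric construction can be sketched as follows. This is a hedged illustration assuming the FoV is a triangle with its apex at the sensor position, its two far corners at angles θ ± α at depth *d*, and that the second and third candidate directions are obtained by rotating the line of sight by ±2α (all function names are illustrative):

```python
import math

def fov_triangle(px, py, theta, alpha, d):
    """Triangle (p, b, c) of one FoV: apex at the sensor position, the two
    far corners at angles theta +/- alpha and depth of view d."""
    b = (px + d * math.cos(theta + alpha), py + d * math.sin(theta + alpha))
    c = (px + d * math.cos(theta - alpha), py + d * math.sin(theta - alpha))
    return (px, py), b, c

def three_fovs(px, py, theta1, alpha, d):
    """FoV_1 plus the two candidate FoVs obtained by rotating the line of
    sight by +/- 2*alpha (an assumed rotation step, for illustration)."""
    return [fov_triangle(px, py, theta1 + k * 2 * alpha, alpha, d)
            for k in (0, 1, -1)]
```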

The occlusion test *obstacleList.intersects(Segment(FoV.p, FoV.v))* (see Algorithm 1, line 17) returns "true" if at least one segment from the obstacle list intersects with the line of sight of a FoV (*Segment(FoV.p, FoV.v)*).
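The library's *intersects* implementation is not shown in the text; a standard way to sketch it is the cross-product orientation test below (simplified to proper crossings, ignoring collinear touching cases; names are illustrative):

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1p2 properly crosses segment q1q2."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def line_of_sight_occluded(sensor_pos, sight_end, obstacle_list):
    """Mimics obstacleList.intersects(Segment(FoV.p, FoV.v)): true if any
    obstacle segment crosses the line of sight."""
    return any(segments_intersect(sensor_pos, sight_end, a, b)
               for a, b in obstacle_list)
```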

This results in *n* recursive calls in the worst case. Therefore, the time complexity of this algorithm is *O*(*n*), where *n* is the number of neighbor sensors of node *v*.

##### Algorithm 1

### 5. Cover Set Construction Strategies and Video Nodes’ Scheduling

### 5.1 Coverage with Fault Tolerance

#### Definition 1

A cover set *Co_i*(*v*) of a video node *v* is defined as a subset of video nodes such that the union of the FoV areas of all *v*′ ∈ *Co_i*(*v*) covers *v*'s FoV area [17].

#### Definition 2

(*g*) in order to get a higher number of cover sets and to avoid neighbors' FoVs; however, it is highly possible that two or more cover sets have some video nodes in common.

We use the *is_inside()* function of the extra graphical library defined by [17] to know whether a sensor's FoV covers a given point. More details of this implementation are presented in Algorithm 2.

##### Algorithm 2

*v*'s FoV is represented by six points (*p*, *b*, *c*, *gp*′, *gb*′, and *gc*′). *p*, *b*, and *c* are the vertices of the triangle representing the FoV, while *gp*′, *gb*′, and *gc*′ are the midpoints between the midpoint of their respective segments [*pg*], [*bg*], and [*cg*] and the barycenter *g*.
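The six-point construction can be sketched directly from this description (function names are illustrative):

```python
def midpoint(a, b):
    """Midpoint of two 2D points."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def six_point_fov(p, b, c):
    """Six control points of a triangular FoV: the vertices p, b, c and
    gp', gb', gc' -- each the midpoint between the midpoint of the segment
    [pg]/[bg]/[cg] and the barycenter g."""
    g = ((p[0] + b[0] + c[0]) / 3, (p[1] + b[1] + c[1]) / 3)
    gp = midpoint(midpoint(p, g), g)  # equals (p + 3g) / 4
    gb = midpoint(midpoint(b, g), g)
    gc = midpoint(midpoint(c, g), g)
    return p, b, c, gp, gb, gc
```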

*v* has to find the following sets (each covering at least one of the points *p*, *b*, or *c* together with the corresponding point *gp*′, *gb*′, or *gc*′):

- *PG* = {*v_1*, *v_4*}, where *v_1* and *v_4* cover the points *p* and *gp*′ of *v*'s FoV,
- *BG* = {*v_3*}, where *v_3* covers the points *b* and *gb*′ of *v*'s FoV,
- *CG* = {*v_2*, *v_5*}, where *v_2* and *v_5* cover the points *c* and *gc*′ of *v*'s FoV.

*v* can then construct its set of cover sets (calculated by the Cartesian product of *PG*, *BG*, and *CG*) as follows:

*Co*(*v*) = {{*v*}, {*v_1*, *v_3*, *v_2*}, {*v_4*, *v_3*, *v_2*}, {*v_1*, *v_3*, *v_5*}, {*v_4*, *v_3*, *v_5*}}
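The Cartesian-product construction can be sketched in a few lines of Python (the set {*v*} itself is added separately, as in the example above):

```python
from itertools import product

def build_cover_sets(pg, bg, cg):
    """Cover sets as the Cartesian product of the point-covering sets
    PG, BG and CG; each combination becomes one candidate cover set."""
    return [set(combo) for combo in product(pg, bg, cg)]

# Example from the text: PG = {v1, v4}, BG = {v3}, CG = {v2, v5}
covers = build_cover_sets(["v1", "v4"], ["v3"], ["v2", "v5"])
# -> {v1,v3,v2}, {v1,v3,v5}, {v4,v3,v2}, {v4,v3,v5}
```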

This requires *n*+*m* recursive calls in the worst case. Therefore, the intersection of two sets can be done with a complexity of *O*(*n*+*m*), where *n* and *m* are the cardinalities of the two sets, respectively.

If cover set {*v_1*, *v_3*, *v_2*} is in the active state and video node *v_1* suddenly fails due to a natural disaster, then another cover set, such as {*v_4*, *v_3*, *v_2*}, can be selected to switch to the active state in order to ensure the coverage of the uncovered region.

### 5.2 Coverage with High-Accuracy

the barycenter (*g*) and the position *p* of each video sensor node included in its set of neighbors after the neighborhood discovery.

We first compute the distance between the position *p* and the barycenter (*g*) of a video node *v*, denoted by *d*(*v.p*, *v.g*), where:

We then compute the distance between the position *p* of the neighbor *v*′ of video node *v* and the barycenter (*g*) of the latter, denoted by *d*(*v*′*.p*, *v.g*), where:

The video nodes included in *v*'s cover set have to verify that:
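The distance equations are not reproduced above; a minimal sketch, assuming *d*(·, ·) is the Euclidean distance on the plane (the values and names below are illustrative):

```python
import math

def dist(a, b):
    """Euclidean distance, the assumed form of d(., .)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# d(v.p, v.g): distance from a sensor's position to its own barycenter.
# d(v'.p, v.g): distance from a neighbor's position to v's barycenter.
v_p, v_g = (0.0, 0.0), (4.0, 3.0)
neighbor_p = (8.0, 3.0)
d_self = dist(v_p, v_g)             # d(v.p, v.g)  = 5.0
d_neighbor = dist(neighbor_p, v_g)  # d(v'.p, v.g) = 4.0
```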

### 5.3 Rotational Video Sensor Nodes’ Scheduling

Every node orders its set of cover sets according to cardinality, giving higher priority to the cover sets with the minimum cardinality.

If two or more cover sets have the same cardinality, priority is given to the cover set with the highest level of energy.

In each round, after receiving the activity messages of its neighbors, a sensor node tests whether the active nodes form one of its cover sets. If so, it goes into the sleep state; otherwise (e.g., in the case of a node failure), it decides to become active and diffuses its decision.
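The scheduling rules above can be sketched as follows. The energy tie-break is interpreted here as the minimum residual energy among a set's members, which is one plausible reading of "highest level of energy"; function names are illustrative:

```python
def order_cover_sets(cover_sets, energy):
    """Priority order: smallest cardinality first; ties broken by the
    highest residual energy (here: the minimum over the set's members,
    an assumed interpretation). energy maps node -> remaining energy."""
    return sorted(cover_sets,
                  key=lambda cs: (len(cs), -min(energy[n] for n in cs)))

def decide_state(my_cover_sets, active_neighbors):
    """One scheduling round: sleep if some cover set is fully active,
    otherwise become active (e.g. after a node failure)."""
    if any(cs <= active_neighbors for cs in my_cover_sets):
        return "sleep"
    return "active"
```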

### 6. Experimental Results

WVSN_R) by using the cover set construction strategy presented in Subsection 5.1. In order to address the coverage problem and to study the effect of the presence of obstacles on area coverage, we have compared the proposed approach with the existing model used in [17] (denoted WVSN), since it is based on the cover set construction approach. Some performance comparisons with another distributed algorithm [19] (denoted DVSA, mentioned in Subsection 3.1) are also presented, so as to verify that the proposed approach better enhances area coverage.

### 6.1 Simulation Environment

The position *P* and the direction *V⃗* of a sensor node are chosen randomly. After a sensor node has received messages indicating the positions and directions of its neighbors, it can select an optimal direction that does not intersect with any obstacle in the field. The rest of the parameters are summarized in Table 1.

### 6.2 Performance Metrics

- *Average percentage of coverage:* the average percentage of the area covered by the set of active nodes over the initial coverage area, computed after the end of all simulation rounds.
- *Average number of cover sets:* after the cover set construction phase, each node calculates the number of cover sets it has found. At the end of the simulation, the average number of cover sets over all sensors is reported.
- *Average percentage of active nodes:* the average ratio of nodes involved in the active set to the initial number of deployed sensors. This percentage is computed over the simulation rounds until the end of the network's lifetime.

### 6.3 Performance Results

#### 6.3.1 Average percentage of coverage

WVSN_R. The reason is that DVSA uses the mobility function, which is useful when the network is not dense enough, whereas WVSN_R relies only on rotation. However, when the number of sensor nodes exceeds 150, WVSN_R performs better, since using only rotation functionalities requires less response time and allows a quicker adjustment of the sensors' FoVs. We can also observe that the presence of obstacles severely affects the quality of coverage for the model without rotation functionalities (WVSN). Since the proposed algorithm also reduces the obstacles' detrimental effects by avoiding occluded FoVs, these enhancements allow us to obtain a significant improvement in coverage performance.

WVSN_R model compared with the existing WVSN and DVSA models. The reason is that in the proposed model we select only FoVs that are not obstructed by obstacles, and we always choose first the direction that covers the region least covered by other sensors.