We investigate the scenario where a perturbed nonlinear system transmits its output measurements to a remote observer via a packet-based communication network. The sensors are grouped into N nodes, and each node independently decides when its measured data is transmitted over the network. The objective is to design both the observer and the local transmission policies in order to obtain accurate state estimates, while only sporadically using the communication network. In particular, given a general nonlinear observer designed in continuous time and satisfying an input-to-state stability property, we explain how to systematically design a dynamic event-triggering rule for each sensor node that avoids the use of a copy of the observer, thereby keeping local computations simple. We prove a practical convergence property of the estimation error to the origin and we show that, under mild conditions on the plant, each local triggering rule admits a uniform, strictly positive minimum inter-event time. The efficiency of the proposed techniques is illustrated on a numerical case study of a flexible robotic arm.
Various methods are nowadays available to design observers for broad classes of systems. Nevertheless, the question of how to tune the observer to achieve satisfactory estimation performance remains largely open. This paper presents a general supervisory design framework for online tuning of the observer gains with the aim of achieving various trade-offs between robustness and speed of convergence. We assume that a robust nominal observer has been designed for a general nonlinear system and the goal is to improve its performance. We present for this purpose a novel hybrid multi-observer, which consists of the nominal observer and a bank of additional observer-like systems, collectively referred to as modes, that differ from the nominal observer only in their output injection gains. We then evaluate online the estimation cost of each mode of the multi-observer and, based on these costs, we select one of them at each time instant. Two different strategies are proposed. In the first one, the initial conditions of the modes are reset each time the algorithm switches between modes. In the second one, the initial conditions are not reset. We prove a convergence property for the hybrid estimation scheme and we illustrate the efficiency of the approach in improving the performance of a given nominal high-gain observer on a numerical example.
We propose a certainty-equivalence scheme for the adaptive control of scalar linear systems subject to additive i.i.d. Gaussian disturbances and bounded control input constraints, without requiring prior knowledge of either the bounds of the system parameters or the control direction. Assuming that the system is at worst marginally stable, mean-square boundedness of the closed-loop system states is proven. Lastly, numerical examples are presented to illustrate our results.
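As a loose illustration of the certainty-equivalence idea above (not the paper's scheme, which handles unknown control direction without such simplifications), the sketch below controls a scalar plant x⁺ = a·x + b·u + w using recursive least-squares estimates of (a, b) and a saturated input; all gains, noise levels, and the probing rule are hypothetical choices.

```python
import numpy as np

def saturate(u, umax):
    return max(-umax, min(umax, u))

def simulate(a=0.9, b=1.0, umax=2.0, steps=200, seed=0):
    """Certainty-equivalence control of x+ = a*x + b*u + w: estimate (a, b)
    by recursive least squares, then apply the saturated deadbeat input."""
    rng = np.random.default_rng(seed)
    x = 5.0
    theta = np.zeros(2)           # estimates of (a, b)
    P = 100.0 * np.eye(2)         # RLS covariance
    traj = [x]
    for _ in range(steps):
        a_hat, b_hat = theta
        # saturated certainty-equivalence input; probe while b_hat is tiny
        u = saturate(-a_hat * x / b_hat, umax) if abs(b_hat) > 1e-3 else umax
        x_next = a * x + b * u + 0.05 * rng.standard_normal()
        # standard RLS update with regressor phi = (x, u)
        phi = np.array([x, u])
        denom = 1.0 + phi @ P @ phi
        K = P @ phi / denom
        theta = theta + K * (x_next - phi @ theta)
        P = P - np.outer(K, phi @ P)
        x = x_next
        traj.append(x)
    return np.array(traj), theta
```

With the persistent excitation from the initial probing step, the two parameters are identified almost immediately and the state settles at the disturbance level.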
This paper considers controlled scalar systems relying on a lossy wireless feedback channel. In contrast with the existing literature, the focus is not on the system controller but on the wireless transmit power controller that is implemented at the system side for reporting the state to the controller. Such a problem may be of interest, e.g., for the remote control of drones, where communication costs may have to be considered. Determining the power control policy that minimizes the combination of the dynamical system cost and the wireless transmission energy is shown to be a non-trivial optimization problem. It turns out that the recursive structure of the problem can be exploited to determine the optimal power control policy. As illustrated in the numerical performance analysis, in the scenario of unperturbed dynamics, the optimal power control policy consists in decreasing the transmit power at the right pace. This allows a significant performance gain compared to conventional policies such as the full transmit power policy or the open-loop policy.
We study emulation-based state estimation for non-linear plants that communicate with a remote observer over a shared wireless network subject to packet losses. To reduce bandwidth usage, a stochastic communication protocol is employed to determine which node should be given access to the network. We describe the overall wireless system as a hybrid model, which allows us to capture the behaviour both between and at transmission instants, whilst covering network features such as random transmission instants, packet losses, and stochastic scheduling. Under this setting, we provide sufficient conditions on the transmission rate that guarantee an input-to-state stability property for the corresponding estimation error system. We illustrate our results with an example of Lipschitz non-linear plants.
This paper presents two schemes to jointly estimate parameters and states of discrete-time nonlinear systems in the presence of bounded disturbances and noise. The parameters are assumed to belong to a known compact set. Both schemes are based on sampling the parameter space and designing a state observer for each sample. A supervisor selects one of these observers at each time instant to produce the parameter and state estimates. In the first scheme, the parameter and state estimates are guaranteed to converge within a certain margin of their true values in finite time, assuming that a sufficiently large number of observers is used and a persistence of excitation condition is satisfied in addition to other observer design conditions. This convergence margin consists of a part that can be chosen arbitrarily small by the user and a part that is determined by the noise levels. The second scheme exploits the convergence properties of the parameter estimate to perform subsequent zoom-ins on the parameter subspace to achieve stricter margins for a given number of observers. The strengths of both schemes are demonstrated using a numerical example.
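A minimal sketch of the sampling idea, under strong simplifying assumptions (scalar plant, known exciting input, hypothetical gain and forgetting factor); it is not the paper's scheme, but it shows a bank of observers indexed by parameter samples and a supervisor selecting the mode with the smallest running output-error cost.

```python
import numpy as np

def multi_observer(theta_true=0.7, steps=150, gain=0.5, seed=1):
    """Bank of observers, one per sampled parameter theta_i, for the plant
    x+ = theta*x + u with measurement y = x + noise; a supervisor keeps a
    discounted output-error cost per observer and picks the smallest."""
    grid = np.linspace(-0.9, 0.9, 19)    # samples of the unknown parameter
    rng = np.random.default_rng(seed)
    x = 1.0
    xhat = np.zeros(grid.size)           # one state estimate per sample
    cost = np.zeros(grid.size)           # discounted squared output errors
    lam = 0.95                           # forgetting factor
    for k in range(steps):
        u = np.sin(0.3 * k)              # known exciting input
        y = x + 0.01 * rng.standard_normal()
        err = y - xhat
        cost = lam * cost + err ** 2
        xhat = grid * xhat + u + gain * err       # mode-i observer update
        x = theta_true * x + u + 0.01 * rng.standard_normal()
    best = int(np.argmin(cost))
    return grid[best], xhat[best]
```

Because the true parameter lies on the sampling grid here, the supervisor selects it exactly; the paper's margins quantify what happens when it does not.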
To investigate solutions of (near-)optimal control problems, we extend and exploit a notion of homogeneity recently proposed in the literature for discrete-time systems. Assuming the plant dynamics are homogeneous, we first derive a scaling property of its solutions along rays provided the sequence of inputs is suitably modified. We then consider homogeneous cost functions and reveal how the optimal value function scales along rays. This result can be used to construct (near-)optimal inputs on the whole state space by only solving the original problem on a given compact manifold of a smaller dimension. Compared to related works in the literature, we impose no conditions on the homogeneity degrees. We demonstrate the strength of this new result by presenting a new approximate scheme for value iteration, which is one of the pillars of dynamic programming. The new algorithm provides guaranteed lower and upper estimates of the true value function at any iteration and has several appealing features in terms of reduced computation. A numerical case study is provided to illustrate the proposed algorithm.
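For background, plain value iteration on a finite state/input grid can be sketched as follows; the paper's contribution, exploiting homogeneity to restrict the computation to a compact manifold, is not reproduced here, and the toy transition/cost tables below are hypothetical.

```python
import numpy as np

def value_iteration(P_next, cost, gamma=0.9, iters=200):
    """Standard value iteration V_{k+1}(x) = min_u [cost(x,u) + gamma*V_k(f(x,u))]
    on a finite grid; P_next[x, u] is the successor state index.
    Returns the value function and the greedy policy."""
    V = np.zeros(cost.shape[0])
    for _ in range(iters):
        Q = cost + gamma * V[P_next]   # Q[x, u]
        V = Q.min(axis=1)
    return V, Q.argmin(axis=1)
```

On a 3-state chain where input 1 moves one step toward state 0 at an extra stage cost of 0.5, the fixed point can be checked by hand: V(0) = 0, V(1) = 1.5, V(2) = 2.5 + 0.9·V(1) = 3.85.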
We present an event-triggered observer design for linear time-invariant systems, where the measured output is sent to the observer only when a triggering condition is satisfied. We proceed by emulation and we first construct a continuous-time Luenberger observer. We then propose a dynamic rule to trigger transmissions, which only depends on the plant output and an auxiliary scalar state variable. The overall system is modeled as a hybrid system, for which a jump corresponds to an output transmission. We show that the proposed event-triggered observer guarantees global practical asymptotic stability for the estimation error dynamics. Moreover, under mild boundedness conditions on the plant state and its input, we prove that there exists a uniform strictly positive minimum inter-event time between any two consecutive transmissions, guaranteeing that the system does not exhibit Zeno solutions. Finally, the proposed approach is applied to a numerical case study of a lithium-ion battery.
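A discrete-time, scalar sketch of the dynamic triggering idea, with hypothetical plant, observer gain, and triggering parameters; it is not the paper's rule, but it illustrates the two ingredients above: an auxiliary scalar variable governing when the output is transmitted, and a remote observer driven by the held output between transmissions.

```python
import numpy as np

def run_event_triggered(steps=400, lam=0.9, eps=1e-4, seed=2):
    """The sensor holds the last transmitted output y_sent and an auxiliary
    variable eta; it transmits when the squared hold error exceeds eta + eps.
    The observer runs on the held output only."""
    rng = np.random.default_rng(seed)
    A, C, L = 0.98, 1.0, 0.5            # plant, output map, observer gain
    x, xhat = 1.0, 0.0
    y_sent = C * x                       # value currently held by the observer
    eta = 1.0                            # auxiliary triggering state
    n_tx = 0
    for _ in range(steps):
        y = C * x + 0.001 * rng.standard_normal()
        e = y - y_sent
        if e * e >= eta + eps:           # dynamic triggering condition
            y_sent, n_tx = y, n_tx + 1   # transmit the fresh measurement
        # eta decays and is replenished while the hold error stays small
        eta = lam * eta + max(eps - e * e, 0.0)
        xhat = A * xhat + L * (y_sent - C * xhat)  # observer on held output
        x = A * x
    return n_tx, abs(x - xhat)
```

The constant eps plays the role of the practical-stability margin: it keeps the threshold strictly positive, which is what rules out Zeno behaviour at the price of practical (rather than asymptotic) convergence.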
Cooperative Adaptive Cruise Control (CACC) is a vehicular technology that allows groups of vehicles on the highway to form closely-coupled automated platoons to increase highway capacity and safety. The underlying mechanism behind CACC is the use of Vehicle-to-Vehicle (V2V) wireless communication networks to transmit acceleration commands to adjacent vehicles in the platoon. However, the use of V2V networks leads to increased vulnerabilities against faults and cyberattacks. Here, we address the problem of increasing the robustness of CACC schemes against cyberattacks by using multiple V2V networks and a data fusion algorithm. The idea is to transmit acceleration commands multiple times through different communication channels to create redundancy at the receiver side. We propose a data fusion algorithm to estimate the true acceleration command and to isolate compromised channels. Finally, we propose a robust $H_{\infty }$ controller that reduces the joint effect of fusion errors and sensor/channel noise on the platooning performance (tracking performance and string stability). Simulation results are presented to illustrate the performance of our approach.
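The fusion step can be loosely illustrated with a median-based rule (a simplification: the paper's fusion algorithm and the threshold value below are not reproduced here). With a majority of honest channels, the median of the redundant copies tracks the true command and large per-channel residuals expose the compromised link.

```python
import numpy as np

def fuse_channels(copies, threshold=0.5):
    """Median-based fusion of redundant copies of one command signal; channels
    whose mean deviation from the fused signal exceeds the (hypothetical)
    threshold are flagged as compromised."""
    copies = np.asarray(copies, dtype=float)         # shape: (channels, time)
    fused = np.median(copies, axis=0)                # robust fused estimate
    deviation = np.abs(copies - fused).mean(axis=1)  # per-channel residual
    return fused, deviation > threshold
```

For three channels, up to one arbitrarily corrupted copy is tolerated, matching the requirement that the number of compromised channels be sufficiently small.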
Originating in the artificial intelligence literature, optimistic planning (OP) is an algorithm that generates near-optimal control inputs for generic nonlinear discrete-time systems whose input set is finite. This technique is, therefore, relevant for the near-optimal control of nonlinear switched systems for which the switching signal is the control, and no continuous input is present. However, OP exhibits several limitations, which prevent its desired application in a standard control engineering context, as it requires, for instance, that the stage cost takes values in $[0, 1]$, an unnatural prerequisite, and that the cost function is discounted. In this article, we modify OP to overcome these limitations, and we call the new algorithm ${\rm OP}_{\text{min}}$. We then analyze ${\rm OP}_{\text{min}}$ under general stabilizability and detectability assumptions on the system and the stage cost. New near-optimality and performance guarantees for ${\rm OP}_{\text{min}}$ are derived, which have major advantages compared to those originally given for OP. We also prove that a system whose inputs are generated by ${\rm OP}_{\text{min}}$ in a receding-horizon fashion exhibits stability properties. As a result, ${\rm OP}_{\text{min}}$ provides a new tool for the near-optimal, stable control of nonlinear switched discrete-time systems for generic cost functions.
We address the problem of robust state reconstruction for discrete-time nonlinear systems when the actuators and sensors are injected with (potentially unbounded) attack signals. Exploiting redundancy in sensors and actuators and using a bank of unknown input observers (UIOs), we propose an observer-based estimator capable of providing asymptotic estimates of the system state and attack signals under the condition that the numbers of sensors and actuators under attack are sufficiently small. Using the proposed estimator, we provide methods for isolating the compromised actuators and sensors. Numerical examples are provided to demonstrate the effectiveness of our methods.
We introduce a sequential learning algorithm to address a robust controller tuning problem which, in effect, finds (with high probability) a candidate solution to a chance-constrained program with black-box functions, satisfying the program's internal performance constraint. The algorithm leverages ideas from the areas of randomised algorithms and ordinal optimisation, and also draws comparisons with the scenario approach; these have all been previously applied to finding approximate solutions for difficult design problems. By exploiting statistical correlations through black-box sampling, we formally prove that our algorithm yields a controller meeting the prescribed probabilistic performance specification. Additionally, we characterise the computational requirement of the algorithm with a probabilistic lower bound on the algorithm's stopping time. To validate our work, the algorithm is then demonstrated for tuning model predictive controllers on a diesel engine air-path across a fleet of vehicles. The algorithm successfully tuned a single controller to meet a desired tracking error performance, even in the presence of the plant uncertainty inherent across the fleet. Moreover, the algorithm was shown to exhibit a sample complexity comparable to the scenario approach.
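A toy version of the sample-and-evaluate loop (not the paper's algorithm, which adds ordinal-optimisation stopping rules and formal probabilistic guarantees): draw candidate gains, score each on sampled plant scenarios, and keep the empirical best. The scalar plant, gain range, and uncertainty interval below are all hypothetical.

```python
import numpy as np

def tune(n_candidates=50, n_scenarios=25, horizon=30, seed=3):
    """Randomised tuning sketch: candidate gains k are scored by the average
    closed-loop cost over sampled plants x+ = (a - k) x, a ~ U[0.8, 1.1]."""
    rng = np.random.default_rng(seed)

    def cost(k, a):
        # accumulated squared state of the closed loop from x0 = 1
        x, J = 1.0, 0.0
        for _ in range(horizon):
            J += x * x
            x = (a - k) * x
        return J

    best_k, best_J = None, np.inf
    for _ in range(n_candidates):
        k = rng.uniform(0.0, 2.0)                        # candidate gain
        a_samples = rng.uniform(0.8, 1.1, n_scenarios)   # plant uncertainty
        J = np.mean([cost(k, a) for a in a_samples])
        if J < best_J:
            best_k, best_J = k, J
    return best_k, best_J
```

The winning gain lands near the centre of the uncertainty interval, i.e., the single controller that performs best across the sampled "fleet".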