Saturday, March 30, 2019

Advantages And Disadvantages Of Smart Antenna

Advantages And Disadvantages Of Smart Antenna

The Direction of Arrival (DOA) estimation algorithm, which may take various forms, generally follows from the homogeneous solution of the wave equation. The models of interest in this dissertation apply equally to an EM wave and to an acoustic wave. Assuming that the propagation model is fundamentally the same, we will, for analytical expediency, show that it can follow from the solution of Maxwell's equations, which strictly speaking are only valid for EM waves. In empty space the equations can be written as

\nabla \cdot \mathbf{E} = 0 (3.1)
\nabla \cdot \mathbf{B} = 0 (3.2)
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} (3.3)
\nabla \times \mathbf{B} = \mu\varepsilon \frac{\partial \mathbf{E}}{\partial t} (3.4)

where \nabla\cdot and \nabla\times, respectively, denote the divergence and curl. Furthermore, B is the magnetic induction and E denotes the electric field, whereas \mu and \varepsilon are the magnetic and dielectric constants, respectively. Invoking (3.1), taking the curl of (3.3) and using the curl-of-curl identity yields

\nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E} = -\nabla^2 \mathbf{E} (3.5)
\nabla \times (\nabla \times \mathbf{E}) = -\frac{\partial}{\partial t}(\nabla \times \mathbf{B}) = -\mu\varepsilon \frac{\partial^2 \mathbf{E}}{\partial t^2} (3.6)
\nabla^2 \mathbf{E} - \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} = 0, \qquad c = 1/\sqrt{\mu\varepsilon} (3.7)

The constant c is generally referred to as the speed of propagation. For EM waves in free space, it follows from the derivation that c = 1/\sqrt{\mu\varepsilon} \approx 3 \times 10^8 m/s. The homogeneous wave equation (3.7) constitutes the physical motivation for our assumed data model, regardless of the type of wave or medium. In many applications, the underlying physics are irrelevant, and it is merely the mathematical structure of the data model that counts.

3.2 Plane wave

In the physics of wave propagation, a plane wave is a constant-frequency wave whose wavefronts are infinite parallel planes of constant peak-to-peak amplitude normal to the phase velocity vector. Strictly, it is impossible to have a true plane wave in practice; only a plane wave of infinite extent can propagate as a plane wave.
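As a quick numerical sanity check of the homogeneous wave equation (3.7), the following sketch (assuming NumPy, with arbitrarily chosen wavenumber and evaluation point) verifies by central finite differences that a one-dimensional plane wave cos(kx - wt) has a vanishing wave-equation residual when w = ck:

```python
import numpy as np

# Hedged sketch: check numerically that A(x, t) = cos(kx - w t) satisfies the
# homogeneous wave equation (3.7), d2A/dx2 - (1/c^2) d2A/dt2 = 0, using
# central finite differences. The values of k, x0, t0 are arbitrary choices.
c = 3.0e8            # propagation speed in free space (m/s)
k = 2 * np.pi / 0.1  # wavenumber for a 10 cm wavelength (rad/m)
w = c * k            # free-space dispersion relation: omega = c k

def A(x, t):
    return np.cos(k * x - w * t)

x0, t0 = 0.013, 1.7e-9   # an arbitrary point in space and time
hx, ht = 1e-4, 1e-13     # finite-difference step sizes

d2A_dx2 = (A(x0 + hx, t0) - 2 * A(x0, t0) + A(x0 - hx, t0)) / hx**2
d2A_dt2 = (A(x0, t0 + ht) - 2 * A(x0, t0) + A(x0, t0 - ht)) / ht**2

# residual of Eq. (3.7); small compared to the individual terms (~k^2)
residual = d2A_dx2 - d2A_dt2 / c**2
```

The residual is nonzero only because of discretization error; shrinking the step sizes (within floating-point limits) drives it further toward zero.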
Actually, many waves are approximately regarded as plane waves in a localized region of space; e.g., a localized source such as an antenna produces a field that is approximately a plane wave far enough away from the antenna, in its far-field region. Similarly, we can treat waves as light rays which correspond locally to plane waves when the length scales are much longer than the wave's wavelength, as is often the case for light in the field of optics.

3.2.1 Mathematical description

Two forms which meet the criteria of having a constant frequency and constant amplitude are the sine and cosine functions. One of the simplest ways to use such a sinusoid involves defining it along the direction of the x axis. The equation shown below uses the cosine function to express a plane wave travelling in the positive x direction:

A(x, t) = A_0 \cos(kx - \omega t + \varphi) (3.8)

where A(x,t) is the magnitude of the wave at a given point in space and time, and A_0 is the amplitude of the wave, i.e. the peak magnitude of the oscillation. k is the wave's wavenumber, or more specifically the angular wavenumber, and equals 2\pi/\lambda, where \lambda is the wavelength of the wave. k has units of radians per unit distance and is a measure of how rapidly the disturbance changes over a given distance at a particular point in time.

x is a point along the x axis; y and z are not considered in the equation because the wave's magnitude and phase are the same at every point on each given y-z plane, and this equation defines what that magnitude and phase are. \omega is the wave's angular frequency, which equals 2\pi/T, where T is the period of the wave. \omega has units of radians per unit time and is likewise a measure of how rapidly the disturbance changes over a given interval of time at a particular point in space. t is a given point in time, and \varphi is the wave's phase shift, with units of radians.
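The roles of k = 2π/λ and ω = 2π/T in Eq. (3.8) can be illustrated directly: the wave repeats after one wavelength in space and one period in time. A minimal sketch (assuming NumPy; all parameter values are arbitrary illustrations):

```python
import numpy as np

# Hedged sketch of Eq. (3.8): A(x, t) = A0 cos(k x - w t + phi) is periodic
# with spatial period lambda and temporal period T.
A0 = 2.0
lam = 0.5                  # wavelength (m), arbitrary
T = 1e-3                   # period (s), arbitrary
k = 2 * np.pi / lam        # angular wavenumber (rad/m)
w = 2 * np.pi / T          # angular frequency (rad/s)
phi = 0.3                  # phase shift (rad), arbitrary

def A(x, t):
    return A0 * np.cos(k * x - w * t + phi)

x, t = 0.12, 4.2e-4        # an arbitrary point in space and time
same_after_wavelength = np.isclose(A(x, t), A(x + lam, t))
same_after_period = np.isclose(A(x, t), A(x, t + T))
```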
It must be made clear that a positive phase shift shifts the wave along the negative x axis direction at a given point in time, and a phase shift of 2\pi radians corresponds to shifting it by exactly one wavelength. Other formulations, which directly use the wave's wavelength \lambda, period T, frequency f and velocity c, are shown as follows:

A = A_0 \cos[2\pi(x/\lambda - t/T) + \varphi] (3.9)
A = A_0 \cos[2\pi(x/\lambda - ft) + \varphi] (3.10)
A = A_0 \cos[(2\pi/\lambda)(x - ct) + \varphi] (3.11)

To appreciate the equivalence of the above set of equations, note that

f = 1/T and c = \lambda/T = \omega/k.

3.2.2 Application

Plane waves are solutions of the scalar wave equation in a homogeneous medium. As for vector wave equations, e.g., waves in an elastic solid or those describing electromagnetic radiation, the solution for the homogeneous medium is similar: the scalar amplitude is replaced by a constant vector; in electromagnetism, for example, it is the vector of the electric field, magnetic field, or vector potential. A transverse wave is a wave in which the amplitude vector is perpendicular to \mathbf{k}, which is the case for electromagnetic waves in an isotropic medium. By contrast, a longitudinal wave is a wave in which the amplitude vector is parallel to \mathbf{k}, typically as for acoustic waves in a gas or fluid.

The plane wave equation holds for arbitrary combinations of \omega and \mathbf{k}. However, a real physical medium will only allow such waves to propagate for those combinations of \omega and \mathbf{k} that satisfy the dispersion relation of the medium. The dispersion relation is often expressed as a function \omega(k), where the ratio \omega/|k| gives the magnitude of the phase velocity and d\omega/dk gives the group velocity.
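The equivalence of the formulations (3.8)-(3.11) under f = 1/T and c = λ/T = ω/k can be confirmed numerically. A small sketch (assuming NumPy; parameter values are arbitrary):

```python
import numpy as np

# Hedged check that formulations (3.9)-(3.11) agree with (3.8),
# using f = 1/T and c = lambda/T = omega/k.
A0, lam, T, phi = 1.5, 0.25, 2e-3, 0.7   # arbitrary wave parameters
f = 1.0 / T
c = lam / T
k = 2 * np.pi / lam
w = 2 * np.pi / T

x = np.linspace(0.0, 1.0, 101)           # a grid of spatial points
t = 3.3e-4                               # an arbitrary instant

form_38 = A0 * np.cos(k * x - w * t + phi)
form_39 = A0 * np.cos(2 * np.pi * (x / lam - t / T) + phi)
form_310 = A0 * np.cos(2 * np.pi * (x / lam - f * t) + phi)
form_311 = A0 * np.cos((2 * np.pi / lam) * (x - c * t) + phi)
```

All four arrays coincide to machine precision, which is exactly the equivalence claimed above.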
As for electromagnetism in an isotropic medium with index of refraction n, the phase velocity is c/n, which equals the group velocity on condition that the index is frequency independent. In the linear uniform case, a wave equation solution can be expressed as a superposition of plane waves; this approach is known as the Angular Spectrum method. Actually, the plane wave solution form is the general consequence of translational symmetry. In the more general case, for periodic structures with discrete translational symmetry, the solutions take the form of Bloch waves, most famously in crystalline atomic materials, in photonic crystals, and in other periodic wave equations.

3.3 Wave propagation

Many physical phenomena are either a result of waves propagating through a medium or exhibit a wavelike physical manifestation. Though (3.7) is a vector equation, we only consider one of its components, say E(r,t), where r is the radius vector. It will later be assumed that the measured sensor outputs are proportional to E(r,t). Interestingly enough, any field of the form

E(\mathbf{r}, t) = s(t - \boldsymbol\alpha^T \mathbf{r})

satisfies (3.7), provided |\boldsymbol\alpha| = 1/c, with T denoting transposition. Through its dependence on t - \boldsymbol\alpha^T \mathbf{r} only, the solution can be interpreted as a wave traveling in the direction \boldsymbol\alpha, with speed of propagation 1/|\boldsymbol\alpha| = c. For the latter reason, \boldsymbol\alpha is referred to as the slowness vector. The chief interest herein is in narrowband forcing functions. The details of generating such a forcing function can be found in the classic book by Jordan [59]. In complex notation [63] and taking the origin as a reference, a narrowband transmitted waveform can be expressed as

E(\mathbf{r}, t) = s(t)\, e^{i\omega(t - \boldsymbol\alpha^T \mathbf{r})} (3.12)

where s(t) is slowly time varying compared to the carrier e^{i\omega t}. For B \ll \omega, where B is the bandwidth of s(t), we can write

E(\mathbf{r}, t) = s(t)\, e^{i(\omega t - \mathbf{k}^T \mathbf{r})} (3.13)

In the last equation (3.13), the so-called wave vector \mathbf{k} = \omega\boldsymbol\alpha was introduced, and its magnitude |\mathbf{k}| = \omega/c is the wavenumber. One can also write |\mathbf{k}| = 2\pi/\lambda, where \lambda is the wavelength.
Note that \mathbf{k} also points in the direction of propagation; e.g., in the x-y plane we have

\mathbf{k} = \frac{\omega}{c}[\cos\theta, \sin\theta]^T (3.14)

where \theta is the direction of propagation, defined counterclockwise relative to the x axis. It should be noted that (3.12) implicitly assumed far-field conditions, since an isotropic point source (one radiating uniformly in all directions) gives rise to a spherical traveling wave whose amplitude is inversely proportional to the distance to the source. All points lying on the surface of a sphere of radius R will then share a common phase and are referred to as a wavefront. This indicates that the distance between the emitters and the receiving antenna array determines whether the spherical nature of the wave should be taken into account. The reader is referred to, e.g., [10], [24] for treatments of near-field reception. Far-field receiving conditions imply that the radius of propagation is so large that a flat plane of constant phase can be considered, thus resulting in a plane wave as indicated in (3.13). Though not necessary, the latter will be our assumed working model for convenience of exposition. Note that a linear medium implies the validity of the superposition principle, and thus allows for more than one traveling wave. Equation (3.13) carries both spatial and temporal information and represents an adequate model for distinguishing signals with distinct spatial-temporal parameters. These may come in various forms, such as DOA (in general azimuth and elevation), signal polarization, transmitted waveforms, temporal frequency, etc. Each emitter is generally associated with a set of such characteristics.
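Under the far-field model above, the plane wave (3.13)-(3.14) gives each sensor at position r a phase factor exp(-j k·r); for a uniform linear array this yields the familiar steering vector. A minimal sketch (assuming NumPy; the array geometry and carrier frequency are hypothetical choices, not prescribed by the text):

```python
import numpy as np

# Hedged sketch: far-field phase model of Eqs. (3.13)-(3.14) for a uniform
# linear array (ULA) of M elements spaced d apart along the x axis.
c = 3.0e8                      # speed of propagation (m/s)
fc = 3.0e9                     # assumed carrier frequency (Hz)
lam = c / fc                   # wavelength
w = 2 * np.pi * fc             # angular frequency
d = lam / 2                    # half-wavelength element spacing
M = 8                          # number of elements (arbitrary)

def steering_vector(theta):
    """theta: DOA in radians, counterclockwise from the x axis."""
    kvec = (w / c) * np.array([np.cos(theta), np.sin(theta)])   # Eq. (3.14)
    positions = np.array([[m * d, 0.0] for m in range(M)])      # ULA on x axis
    return np.exp(-1j * positions @ kvec)                       # exp(-j k.r)

a = steering_vector(np.deg2rad(60.0))
```

Each entry has unit magnitude (a pure phase), and for half-wavelength spacing the element-to-element phase step is -π cos(θ), here -π/2 for θ = 60°.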
The interest in extracting the signal parameters forms the core of sensor array signal processing as presented herein, and continues to be an important and active topic of research.

3.4 Smart antenna

Smart antennas are devices which adapt their radiation pattern to achieve improved performance, either range or capacity or some combination of these [1]. The rapid growth in demand for mobile communications services has encouraged research into the design of wireless systems that improve spectrum efficiency and increase link quality [7]. By using existing methods more effectively, smart antenna technology has the potential to significantly increase the capacity and coverage of wireless mobile networks; with intelligent control of signal transmission and reception, communications applications can be significantly improved [2].

In a communication system, the ability to distinguish different users is essential. The smart antenna can be used to provide additional spatial diversity, referred to as Space Division Multiple Access (SDMA). Conventionally, the most common multiple access schemes are Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA).
These schemes give independent users three different levels of diversity, in the frequency, time, and code domains respectively.

The potential benefits of the smart antenna show in many ways, such as: mitigating multipath fading; reducing delay spread so that high data rates can be supported; suppressing interference; reducing the distance effect; reducing the outage probability; improving the BER (Bit Error Rate) performance; increasing system capacity; improving spectral efficiency; supporting flexible and efficient handoff to expand cell coverage; flexible management of the cell; extending the battery life of the mobile station; and lowering maintenance and operating costs.

3.4.1 Types of Smart Antennas

The environment and the system's requirements determine the type of smart antenna. There are two main types of smart antennas, as follows.

Phased Array Antenna
In this type of smart antenna, there are a number of fixed beams, one of which is switched on or steered toward the target signal. Only this beam selection adapts to the user; in other words, as the target moves, the system switches between the fixed beams to track it [2].

Adaptive Array Antenna
Integrated with adaptive digital signal processing technology, the smart antenna uses a digital signal processing algorithm to measure the signal strength of the beam, so that the antenna can dynamically change the beam in which the transmit power is concentrated, as figure 3.2 shows. The application of spatial processing can increase the signal capacity, so that multiple users may share a channel.

An adaptive antenna array is a closed-loop feedback control system consisting of an antenna array and a real-time adaptive signal receiver processor, which uses the feedback control method for automatic adjustment of the antenna array pattern.
It forms nulls in the direction of the interference to cancel the interfering signal, and can strengthen the useful signal, so as to achieve the purpose of anti-jamming [3].

Figure 3.2

3.4.2 Advantages and disadvantages of smart antenna

Advantages
First of all, a high level of efficiency and power is provided by the smart antenna for the target signal. Smart antennas generate narrow pencil beams when a large number of antenna elements are used at high frequency. Thus, in the direction of the target signal, the efficiency is significantly high. With adaptive array antennas, a comparable power gain can be produced, on condition that a fixed number of antenna elements is used.

Another improvement is in the amount of interference which is suppressed: phased array antennas suppress interference with the narrow beam, while adaptive array antennas suppress it by adjusting the beam pattern [2].

Disadvantages
The main disadvantage is the cost. Actually, the cost of such devices will be higher than before, not only in the electronics section but also in energy consumption. That is to say, the device is expensive and will also reduce the lifetime of other devices. The number of receiver chains used must be decreased in order to reduce the cost; because RF electronics and an A/D converter are used for each antenna, the costs increase with the array size.

Moreover, the size of the antenna is another problem. Large base stations are unavoidable to make this method efficient, which increases the size; apart from this, multiple external antennas are needed on each terminal.

Finally, there are disadvantages where diversity is concerned. When mitigation is needed, diversity becomes a serious problem: the terminals and base stations must be equipped with multiple antennas.

3.5 White Noise

White noise is a random signal with a flat power spectral density.
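The contrast between the two antenna types can be illustrated with a toy beamforming example. The sketch below (assuming NumPy and a hypothetical 8-element half-wavelength uniform linear array; the DOAs are arbitrary) compares a fixed (phased-array style) beamformer with a simple adaptive variant that projects the interferer's steering vector out of the weights, placing a null on the interferer:

```python
import numpy as np

M = 8  # elements of a hypothetical half-wavelength-spaced uniform linear array

def steer(theta_deg):
    # half-wavelength spacing gives a phase step of pi*cos(theta) per element
    return np.exp(-1j * np.pi * np.cos(np.deg2rad(theta_deg)) * np.arange(M))

target, interferer = 90.0, 70.0   # arbitrary example DOAs in degrees
a_t, a_i = steer(target), steer(interferer)

# phased array: fixed (conventional) weights steered at the target
w_fixed = a_t / M

# adaptive array: project the interferer's steering vector out of the weights,
# forming a spatial null on the interferer while keeping unit gain on target
P = np.eye(M) - np.outer(a_i, a_i.conj()) / (a_i.conj() @ a_i)
w_adapt = P @ a_t
w_adapt = w_adapt / (w_adapt.conj() @ a_t)   # renormalize target gain to 1

def gain(w, theta_deg):
    return abs(w.conj() @ steer(theta_deg))
```

Both beamformers keep unit gain toward the target; the fixed one leaves a sidelobe toward the interferer, while the adaptive one drives that gain to (numerically) zero, which is the null-forming behavior described above.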
In other words, the signal contains equal power within any fixed bandwidth centred at any frequency. White noise draws its name from white light, in which the power spectral density of the light is distributed across the visible band in such a way that the eye's three colour receptors are approximately equally stimulated.

In the statistical sense, a time series is characterized as weak white noise on condition that it is a sequence of serially uncorrelated random variables with zero mean and finite variance. Strong white noise additionally has the property of being independent and identically distributed, which implies no autocorrelation. In particular, the series is called Gaussian white noise [1] if it is normally distributed with zero mean and standard deviation \sigma.

Actually, an infinite-bandwidth white noise signal is a purely theoretical construction which cannot be realized. In practice, the bandwidth of white noise is limited by the transmission medium, the mechanism of noise generation, and finite observation capabilities. If a random signal is observed with a flat spectrum over a medium's widest possible bandwidth, we refer to it as white noise.

3.5.1 Mathematical definition

White random vector
A random vector \mathbf{w} is a white random vector if and only if its mean vector and autocorrelation matrix are as follows:

\mu_w = \mathbb{E}\{\mathbf{w}\} = 0 (3.15)
R_{ww} = \mathbb{E}\{\mathbf{w}\mathbf{w}^T\} = \sigma^2 \mathbf{I}. (3.16)

That is to say, it is a zero-mean random vector, and its autocorrelation matrix is a multiple of the identity matrix. When the autocorrelation matrix is a multiple of the identity, we speak of spherical correlation.

White random process
A continuous-time random process w(t), t \in \mathbb{R}, is a white noise signal if and only if its mean function and autocorrelation function satisfy the following:

\mu_w(t) = \mathbb{E}\{w(t)\} = 0 (3.17)
R_{ww}(t_1, t_2) = \mathbb{E}\{w(t_1)\, w(t_2)\} = \frac{N_0}{2}\,\delta(t_1 - t_2).
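The defining properties (3.15)-(3.16) can be checked empirically by replacing the expectations with sample averages over many draws. A minimal sketch (assuming NumPy; the dimension, variance, and sample count are arbitrary choices):

```python
import numpy as np

# Hedged numerical check of (3.15)-(3.16): for a white random vector, the mean
# vector is zero and the autocorrelation matrix is sigma^2 * I. Expectations
# are approximated by sample averages.
rng = np.random.default_rng(0)
sigma = 1.5
dim, n_draws = 4, 200_000

W = rng.normal(0.0, sigma, size=(n_draws, dim))   # rows are realizations of w

mean_est = W.mean(axis=0)                         # estimate of E{w}
R_est = (W.T @ W) / n_draws                       # estimate of E{w w^T}
```

The estimated mean is near the zero vector and the estimated autocorrelation matrix is near sigma^2 times the identity, i.e. (approximately) spherical correlation.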
(3.18)

That is to say, it is zero mean for all time and has infinite power at zero time shift, since its autocorrelation function is the Dirac delta function. The above autocorrelation function implies the following power spectral density. Since the Fourier transform of the delta function is equal to 1, we obtain

S_{ww}(\omega) = \frac{N_0}{2}. (3.19)

Since this power spectral density is the same at all frequencies, we call it white as an analogy to the frequency spectrum of white light. A generalization to random elements on infinite-dimensional spaces, e.g. random fields, is the white noise measure.

3.5.2 Statistical properties

White noise is uncorrelated in time and does not restrict the values a signal can take. Any distribution of values of the white noise is possible. Even a so-called binary signal that can only take the values 1 or -1 will be white, on condition that the sequence is statistically uncorrelated. Any noise with a continuous distribution, like a normal distribution, can certainly be white noise.

It is often wrongly assumed that Gaussian noise is necessarily white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, in this context the probability of the signal reaching a given amplitude, while the term white refers to the way the signal power is distributed over time or among frequencies.

Figure: Spectrogram of pink noise (left) and white noise (right), shown with a linear frequency axis (vertical).

We can indeed find Gaussian white noise, but also Poisson, Cauchy, etc. white noises. Thus, the two words Gaussian and white are often both specified in mathematical models of systems. Gaussian white noise is a good approximation of many real-world situations and generates mathematically tractable models. These models are used so frequently that the term additive white Gaussian noise has a standard abbreviation: AWGN.
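The point that whiteness does not imply Gaussianity can be demonstrated with the binary example mentioned above: an independent ±1 sequence is clearly non-Gaussian, yet its sample autocorrelation vanishes at nonzero lags and its average spectrum is flat. A sketch (assuming NumPy; the sequence length is arbitrary):

```python
import numpy as np

# Hedged illustration: a binary +/-1 i.i.d. sequence is white (no lag
# correlation, flat average spectrum) but obviously not Gaussian.
rng = np.random.default_rng(1)
n = 4096
x = rng.choice([-1.0, 1.0], size=n)       # binary white noise, variance 1

# lag-1 sample autocorrelation: near zero for a white sequence
r1 = np.mean(x[:-1] * x[1:])

# periodogram: for white noise the mean power per frequency bin equals the
# signal variance (here 1), i.e. the spectrum is flat on average
power = np.abs(np.fft.rfft(x))**2 / n
mean_power = power[1:-1].mean()           # interior bins (skip DC/Nyquist)
```

Individual periodogram bins fluctuate strongly (they are not consistent estimates of the flat density), but their average sits near the variance, which is the sense in which the spectrum is flat.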
White noise is the generalized mean-square derivative of the Wiener process, or Brownian motion.

3.6 Normal Distribution

In probability theory, the normal (or Gaussian) distribution is a continuous probability distribution that has a bell-shaped probability density function, known as the Gaussian function or informally as the bell curve [1]:

f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}

The parameter \mu is the mean or expectation (location of the peak) and \sigma^2 is the variance; \sigma is known as the standard deviation. The distribution with \mu = 0 and \sigma^2 = 1 is called the standard normal distribution or the unit normal distribution. A normal distribution is often used as a first approximation to describe real-valued random variables that cluster around a single mean value.

The normal distribution is considered the most prominent probability distribution in statistics. There are several reasons for this [1]. First, the normal distribution arises from the central limit theorem, which states that under mild conditions the mean of a large number of random variables drawn from the same distribution is distributed approximately normally, irrespective of the form of the original distribution. This gives it exceptionally wide application in, for example, sampling. Secondly, the normal distribution is very tractable analytically; that is, a large number of results involving this distribution can be derived in explicit form.

For these reasons, the normal distribution is commonly encountered in practice, and is used throughout statistics, the natural sciences, and the social sciences [2] as a simple model for complex phenomena. For example, the observational error in an experiment is usually assumed to follow a normal distribution, and the propagation of uncertainty is computed using this assumption.
Note that a normally distributed variable has a symmetric distribution about its mean. Quantities that grow exponentially, such as prices, incomes or populations, are often skewed to the right, and hence may be better described by other distributions, such as the log-normal distribution or the Pareto distribution. In addition, the probability of seeing a normally distributed value that is far (i.e. more than a few standard deviations) from the mean drops off extremely rapidly. As a result, statistical inference using a normal distribution is not robust to the presence of outliers (data that are unexpectedly far from the mean, due to exceptional circumstances, observational error, etc.). When outliers are expected, data may be better described using a heavy-tailed distribution such as Student's t-distribution.

3.6.1 Mathematical Definition

The simplest case of a normal distribution is known as the standard normal distribution, described by the probability density function

\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}.

The factor 1/\sqrt{2\pi} in this expression ensures that the total area under the curve \phi(x) is equal to one, and the 1/2 in the exponent makes the width of the curve (measured as half the distance between the inflection points) also equal to one. It is traditional in statistics to denote this function with the Greek letter \phi (phi), whereas density functions for all other distributions are usually denoted with the letters f or p [5]. The alternative glyph \varphi is also used quite often; here, however, it is reserved to denote characteristic functions.

Every normal distribution is the result of exponentiating a quadratic function (just as an exponential distribution results from exponentiating a linear function):

f(x) = e^{a x^2 + b x + c}.

This yields the classic bell curve shape, provided that a < 0 so that the quadratic is concave everywhere.
One can adjust a to control the width of the bell, then adjust b to move the central peak of the bell along the x-axis, and finally one must choose c such that \int_{-\infty}^{\infty} f(x)\,dx = 1 (which is only possible when a < 0). Rather than using a, b, and c, it is far more common to describe a normal distribution by its mean \mu = -b/(2a) and variance \sigma^2 = -1/(2a). Changing to these new parameters allows one to rewrite the probability density function in a convenient standard form:

f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} = \frac{1}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right).

For a standard normal distribution, \mu = 0 and \sigma^2 = 1. The last part of the equation above shows that any other normal distribution can be regarded as a version of the standard normal distribution that has been stretched horizontally by a factor \sigma and then translated rightward by a distance \mu. Thus, \mu specifies the position of the bell curve's central peak, and \sigma specifies the width of the bell curve.

The parameter \mu is at the same time the mean, the median and the mode of the normal distribution. The parameter \sigma^2 is called the variance; as for any random variable, it describes how concentrated the distribution is around its mean. The square root of \sigma^2 is called the standard deviation and is the width of the density function.

The normal distribution is usually denoted by N(\mu, \sigma^2) [6]. Thus, when a random variable X is distributed normally with mean \mu and variance \sigma^2, we write

X \sim \mathcal{N}(\mu, \sigma^2).

3.6.2 Alternative formulations

Some authors advocate using the precision instead of the variance. The precision is normally defined as the reciprocal of the variance (\tau = \sigma^{-2}), although it is occasionally defined as the reciprocal of the standard deviation (\tau = \sigma^{-1}) [7]. This parameterization has an advantage in numerical applications where \sigma^2 is very close to zero, and it is more convenient to work with in analysis, as \tau is a natural parameter of the normal distribution.
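The standard-form identity f(x) = (1/σ) φ((x - μ)/σ) and the unit-area property can both be verified numerically. A minimal sketch (assuming NumPy; μ and σ values are arbitrary):

```python
import numpy as np

# Hedged check of the identity above: the general normal density equals
# (1/sigma) * phi((x - mu)/sigma), with phi the standard normal pdf, and it
# integrates to one.
def phi(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

mu, sigma = 2.0, 0.7
x = np.linspace(-3.0, 7.0, 1001)     # roughly +/- 7 standard deviations

lhs = normal_pdf(x, mu, sigma)
rhs = phi((x - mu) / sigma) / sigma

# crude Riemann-sum quadrature of the density over the wide interval
area = normal_pdf(x, mu, sigma).sum() * (x[1] - x[0])
```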
This parameterization is common in Bayesian statistics, as it simplifies the Bayesian analysis of the normal distribution. Another advantage of using this parameterization is in the study of conditional distributions in the multivariate normal case. The form of the normal distribution with the more common definition \tau = \sigma^{-2} is as follows:

f(x; \mu, \tau) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\frac{\tau(x-\mu)^2}{2}}.

The question of which normal distribution should be called the standard one is also answered differently by various authors. Starting from the works of Gauss, the standard normal was considered to be the one with variance \sigma^2 = 1/2:

f(x) = \frac{1}{\sqrt{\pi}}\, e^{-x^2}

Stigler (1982) goes even further and insists that the standard normal should be the one with variance \sigma^2 = 1/(2\pi):

f(x) = e^{-\pi x^2}

According to the author, this formulation is advantageous because of a much simpler and easier-to-remember formula, the fact that the pdf has unit height at zero, and simple approximate formulas for the quantiles of the distribution.

3.7 Cramér-Rao Bound

In estimation theory and statistics, the Cramér-Rao bound (CRB) or Cramér-Rao lower bound (CRLB), named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, who were among the first to derive it [1][2][3], expresses a lower bound on the variance of estimators of a deterministic parameter. The bound is also known as the Cramér-Rao inequality or the information inequality.

In its simplest form, the bound states that the variance of any unbiased estimator is at least as high as the inverse of the Fisher information. An unbiased estimator which achieves this lower bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound.
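The precision parameterization above is just a reparameterization: with τ = 1/σ², the two density formulas coincide pointwise. A one-line sketch of this equivalence (assuming NumPy; parameter values are arbitrary):

```python
import numpy as np

# Hedged check that the precision form matches the usual variance form of the
# normal density when tau = 1 / sigma^2.
def pdf_var(x, mu, var):
    return np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2 * np.pi * var)

def pdf_prec(x, mu, tau):
    return np.sqrt(tau / (2 * np.pi)) * np.exp(-tau * (x - mu)**2 / 2)

x = np.linspace(-5.0, 5.0, 201)
mu, var = 0.8, 1.7
equal = np.allclose(pdf_var(x, mu, var), pdf_prec(x, mu, 1.0 / var))
```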
This may occur even when an MVU estimator exists.

The Cramér-Rao bound can also be used to bound the variance of biased estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér-Rao lower bound; see estimator bias.

Statement
The Cramér-Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section.

Scalar unbiased case
Suppose \theta is an unknown deterministic parameter which is to be estimated from measurements x, distributed according to some probability density function f(x; \theta). The variance of any unbiased estimator \hat\theta of \theta is then bounded by the reciprocal of the Fisher information I(\theta):

\mathrm{var}(\hat\theta) \geq \frac{1}{I(\theta)}

where the Fisher information I(\theta) is defined by

I(\theta) = \mathrm{E}\left[\left(\frac{\partial \ell(x; \theta)}{\partial\theta}\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2 \ell(x; \theta)}{\partial\theta^2}\right]

and \ell(x; \theta) = \log f(x; \theta) is the natural logarithm of the likelihood function and \mathrm{E} denotes the expected value.

The efficiency of an unbiased estimator \hat\theta measures how close the estimator's variance comes to this lower bound; estimator efficiency is defined as

e(\hat\theta) = \frac{I(\theta)^{-1}}{\mathrm{var}(\hat\theta)}

or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér-Rao lower bound thus gives e(\hat\theta) \le 1.

General scalar case
A more general form of the bound can be obtained by considering an unbiased estimator T(X) of a function \psi(\theta) of the parameter \theta. Here, unbiasedness is understood as stating that \mathrm{E}[T(X)] = \psi(\theta).
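A standard worked example of the scalar unbiased case: for n i.i.d. samples from N(θ, σ²) with σ known, the Fisher information is I(θ) = n/σ², so the CRB is σ²/n, and the sample mean attains it (efficiency 1). A Monte Carlo sketch (assuming NumPy; all sizes are arbitrary):

```python
import numpy as np

# Hedged Monte Carlo illustration: the sample mean of n i.i.d. N(theta,
# sigma^2) observations is unbiased with variance sigma^2/n, which equals the
# Cramer-Rao lower bound 1/I(theta) = sigma^2/n.
rng = np.random.default_rng(2)
theta, sigma = 1.0, 2.0
n = 50                                      # samples per experiment
n_trials = 100_000                          # Monte Carlo repetitions

samples = rng.normal(theta, sigma, size=(n_trials, n))
estimates = samples.mean(axis=1)            # sample-mean estimator per trial

crb = sigma**2 / n                          # Cramer-Rao lower bound
var_est = estimates.var()                   # empirical estimator variance
efficiency = crb / var_est                  # should be close to 1
```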
In this case, the bound is given by

\mathrm{var}(T) \geq \frac{[\psi'(\theta)]^2}{I(\theta)}

where \psi'(\theta) is the derivative of \psi(\theta) with respect to \theta, and I(\theta) is the Fisher information defined above.

Bound on the variance of biased estimators
Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows. Consider an estimator \hat\theta with bias b(\theta) = \mathrm{E}[\hat\theta] - \theta, and let \psi(\theta) = b(\theta) + \theta. By the result above, any unbiased estimator whose expectation is \psi(\theta) has variance greater than or equal to [\psi'(\theta)]^2 / I(\theta). Thus, any estimator \hat\theta whose bias is given by a function b(\theta) satisfies

\mathrm{var}(\hat\theta) \geq \frac{[1 + b'(\theta)]^2}{I(\theta)}.

The unbiased version of the bound is a special case of this result, with b(\theta) = 0.

It is trivial to have a small variance: an estimator that is constant has a variance of zero. But from the above equation we find that the mean squared error of a biased estimator is bounded by

\mathrm{E}\left[(\hat\theta - \theta)^2\right] \geq \frac{[1 + b'(\theta)]^2}{I(\theta)} + b(\theta)^2,

using the standard decomposition of the MSE. Note, however, that this bound can be less than the unbiased Cramér-Rao bound 1/I(\theta); see the example of estimating variance below.
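The biased-estimator bound can be illustrated with a shrinkage estimator (a hypothetical example, not taken from the source): for the Gaussian mean, the estimator c·x̄ has bias b(θ) = (c - 1)θ, so b'(θ) = c - 1 and the bound becomes [1 + b'(θ)]²/I(θ) = c²σ²/n, which this estimator attains exactly. A Monte Carlo sketch (assuming NumPy; constants are arbitrary):

```python
import numpy as np

# Hedged illustration of the biased-estimator bound: the shrinkage estimator
# c * xbar of a N(theta, sigma^2) mean has bias (c - 1) * theta, so the bound
# [1 + b'(theta)]^2 / I(theta) equals c^2 * sigma^2 / n, attained with equality.
rng = np.random.default_rng(4)
theta, sigma, n, c = 1.5, 1.0, 25, 0.8
n_trials = 200_000

xbar = rng.normal(theta, sigma, size=(n_trials, n)).mean(axis=1)
shrunk = c * xbar                           # biased shrinkage estimator

I_fisher = n / sigma**2                     # Fisher information for the mean
bound = (1 + (c - 1))**2 / I_fisher         # [1 + b'(theta)]^2 / I(theta)
var_emp = shrunk.var()                      # empirical variance, ~= bound
```

Note that here the bound c²σ²/n is below the unbiased CRB σ²/n, consistent with the remark above that biased estimators can have smaller variance.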
Multivariate case
Extending the Cramér-Rao bound to multiple parameters, define a parameter column vector

\boldsymbol\theta = [\theta_1, \theta_2, \dots, \theta_d]^T \in \mathbb{R}^d

with probability density function f(x; \boldsymbol\theta) which satisfies the two regularity conditions below.

The Fisher information matrix is a d \times d matrix with element I_{m,k} defined as

I_{m,k} = \mathrm{E}\left[\frac{\partial}{\partial\theta_m}\log f(x; \boldsymbol\theta)\, \frac{\partial}{\partial\theta_k}\log f(x; \boldsymbol\theta)\right].

Let \mathbf{T}(X) be an estimator of any vector function of parameters, \mathbf{T}(X) = (T_1(X), \ldots, T_n(X))^T, and denote its expectation vector \mathrm{E}[\mathbf{T}(X)] by \boldsymbol\psi(\boldsymbol\theta). The Cramér-Rao bound then states that the covariance matrix of \mathbf{T}(X) satisfies

\mathrm{cov}_{\boldsymbol\theta}(\mathbf{T}(X)) \geq \frac{\partial\boldsymbol\psi(\boldsymbol\theta)}{\partial\boldsymbol\theta}\, I(\boldsymbol\theta)^{-1} \left(\frac{\partial\boldsymbol\psi(\boldsymbol\theta)}{\partial\boldsymbol\theta}\right)^T

where the matrix inequality A \geq B is understood to mean that A - B is positive semidefinite, and \partial\boldsymbol\psi(\boldsymbol\theta)/\partial\boldsymbol\theta is the Jacobian matrix whose (i,j)th element is given by \partial\psi_i(\boldsymbol\theta)/\partial\theta_j. If \mathbf{T}(X) is an unbiased estimator of \boldsymbol\theta (i.e., \boldsymbol\psi(\boldsymbol\theta) = \boldsymbol\theta), then the Cramér-Rao bound reduces to

\mathrm{cov}_{\boldsymbol\theta}(\mathbf{T}(X)) \geq I(\boldsymbol\theta)^{-1}.
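The element-wise definition of the Fisher information matrix can be made concrete for a single Gaussian observation with parameter vector (μ, σ²), whose score components have closed forms and whose information matrix is known to be diag(1/σ², 1/(2σ⁴)). A Monte Carlo sketch of the definition above (assuming NumPy; sample sizes are arbitrary):

```python
import numpy as np

# Hedged sketch of the Fisher information matrix definition: for one
# observation x ~ N(mu, v) with theta = (mu, v), average the products of the
# score components over many draws and compare with diag(1/v, 1/(2 v^2)).
rng = np.random.default_rng(3)
mu, v = 0.5, 2.0
n_draws = 1_000_000

x = rng.normal(mu, np.sqrt(v), size=n_draws)

# closed-form scores: d log f / d mu and d log f / d v for the Gaussian pdf
score_mu = (x - mu) / v
score_v = ((x - mu)**2 - v) / (2 * v**2)

scores = np.vstack([score_mu, score_v])     # 2 x n_draws score matrix
I_est = scores @ scores.T / n_draws         # Monte Carlo E[score score^T]

I_true = np.diag([1 / v, 1 / (2 * v**2)])   # known Gaussian Fisher matrix
```

The off-diagonal entries come out near zero, reflecting the well-known orthogonality of the mean and variance parameters of the Gaussian.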
