
Image Fusion: A Review of Methods and Applications

R. Vijayakumar, Research Scholar, Dr. SNS Rajalaksmi College of Arts & Science, Coimbatore. E-mail: vijay31kumar@gmail.com
Dr. K. Karthikeyan, Assistant Professor, Department of Computer Science, Government Arts & Science College, Palladam, Coimbatore

Abstract: This paper presents a brief overview of the development of image fusion algorithms and applications in recent years, and of the challenges and capabilities of image fusion. The algorithms typically employed are covered in order to convey the complexity of their use in different scenarios.


The overall objective of this paper is to explore different methods for efficiently fusing digital images. It has been found that much of the prevailing research neglects important consequences; that is, no single technique is accurate for every type of circumstance.

Keywords: image fusion; fusion algorithms; fusion applications; PCA; DCT; DWT

INTRODUCTION

The requirement for better diagnosis and clearer interpretation of acquired images gave rise to image fusion. Sensor fusion has been a fast-developing area of research in recent years.

With the increase in the number and types of available sensors, the need to manage the growing quantity of information has produced the need to fuse such data for human perception. The ability to combine and integrate information enables new capabilities in myriad areas. Examples where sensor fusion is now widely employed include automotive automation, mobile robot navigation, and target tracking. Ideally, a fused image should contain all the information from the source images; however, this is not possible. In practice, not all of the source image information is transferred into the fused image.

Figure 1: Graphical illustration of the image fusion process, at the pixel, feature, and decision levels.

Only the required and necessary information is transferred, and information from the source images may be lost during the fusion process. At the same time, the fusion process itself may introduce extra or false information, called "fusion artefacts", into the fused image. In Figure 2, the blue portion represents the information transferred from the source images into the fused image, referred to as the "fusion gain" (or fusion score). The green portion indicates the information lost during the fusion process, termed the "fusion loss".

This information from the source images is not present in the fused image. The red portion corresponds to unnecessary information (fusion artefacts) introduced into the fused image; it has no relevance to the source images. Hence, a fusion algorithm should consider all of these factors for better performance. Through the integration of multiple sensors, certain advantages can be achieved compared with a single input.

Enhanced reliability, extended parameter coverage, and improved resolution are all desirable in any system. While sensor fusion research has improved by leaps and bounds in recent years, we are certainly still far from matching the competence of the human mind in analysing different data simultaneously. Because multiple sources and types of information are fed in continuously, various problems arise, such as data association, sensor uncertainty, and data management. In most cases, these are associated with the inherent ambiguity of each sensor, with device noise, and with ambiguities in the environment being measured.

A robust sensor fusion system should be able to handle such uncertainties and, in the end, provide consistent results about the environment. Image fusion can be performed at three different processing levels, according to the stage at which the fusion takes place: pixel, feature, and decision level.

Figure 2: Image fusion levels.

1. Pixel-level fusion: pixel-based fusion is performed on a pixel-by-pixel basis. It generates a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, to improve the performance of image processing tasks such as segmentation (a minimal sketch of the simplest pixel-level rules follows the method list below).
2. Feature-level fusion: fusion at the feature level requires the extraction of objects recognized in the various data sources. It requires the extraction of salient features, which depend on their environment, such as pixel intensities, edges, or textures.

These similar features from the input images are then fused.
3. Decision-level fusion: decision-based fusion merges information at a higher level of abstraction, combining the results from multiple algorithms to yield a final fused decision. The input images are processed individually for information extraction, and the obtained information is then combined by applying decision rules to reinforce a common interpretation.

The three fusion levels are further classified into the methods shown in Figure 3:
- Pixel-level image fusion: averaging, Brovey, PCA, wavelet transform, intensity-hue-saturation (IHS) transform.
- Feature-level image fusion: neural networks, region-based segmentation, k-means clustering, similarity matching for content-level retrieval.
- Decision-level image fusion: fusion based on fuzzy and unsupervised FCM, fusion based on support vector machines, fusion based on the information level in the regions of the image.

Figure 3: Image fusion methods.
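To make the pixel-level case concrete, the sketch below implements the simple average, maximum, and minimum rules listed under pixel-level fusion in Figure 3. The helper name fuse_pixel_level is hypothetical, and the inputs are assumed to be registered, same-sized 8-bit grayscale images.

```python
import numpy as np

def fuse_pixel_level(img_a: np.ndarray, img_b: np.ndarray, rule: str = "average") -> np.ndarray:
    """Fuse two registered, same-sized grayscale images pixel by pixel."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    if rule == "average":
        fused = (a + b) / 2.0          # simple average of corresponding pixels
    elif rule == "maximum":
        fused = np.maximum(a, b)       # keep the brighter (often better-focused) pixel
    elif rule == "minimum":
        fused = np.minimum(a, b)       # keep the darker pixel (for noise-as-brightness cases)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return np.clip(fused, 0, 255).astype(np.uint8)
```

These rules need no training or transform, which is why they serve as the usual baselines in the comparisons that follow.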

Section 2 reviews existing image-fusion-related work and compares results; Section 3 identifies the problems and gaps in various existing techniques; Section 4 presents different image fusion methods applied in practical applications; finally, Section 5 provides concluding remarks and the future scope of the work.

LITERATURE REVIEW

After an in-depth survey, the present review finds that the selection of a suitable fusion level depends on the type of information available. Image fusion is applied to both single-sensor and multi-sensor images.

One of the standard methods of image fusion is principal component analysis (PCA), which can preserve more spatial resolution but introduces more serious spectral degradation [1].
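As a concrete sketch of PCA-weighted fusion, using a common textbook formulation (not necessarily the exact variant used in [1]): the two source images are treated as two correlated variables, and the leading eigenvector of their covariance matrix supplies the fusion weights.

```python
import numpy as np

def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Weight each source image by the leading principal component of the
    2x2 covariance of their pixel values, then take the weighted sum."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    cov = np.cov(np.stack([a, b]))              # 2x2 covariance of the two images
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: suited to symmetric matrices
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # leading eigenvector
    w = v / v.sum()                             # normalize into fusion weights
    fused = w[0] * img_a.astype(np.float64) + w[1] * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

One intuition for the spectral degradation noted above is that the weights are chosen to capture variance, not to preserve the spectral balance of either source.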

Goodman and Lee [2] were among the earliest to develop a multi-resolution theory of multiwavelets. Multiwavelet-based image fusion can achieve better fusion quality [3]. In 2004, Wang proposed a new image fusion algorithm based on the multiwavelet transform to fuse multi-sensor images [4]. One famous multiwavelet filter is the GHM filter proposed by Geronimo, Hardin, and Massopust [5].

The GHM filter offers a combination of orthogonality, symmetry, and compact support, which cannot be achieved by any scalar wavelet basis except the Haar basis [6]. In his Ph.D. work, Strela [7] further extended the theory of multiwavelets.

Yan Na et al. presented an adaptive multiwavelet-transform-based fusion scheme; experimental results show that it can determine the multiwavelet decomposition level and provide good fusion results [8]. Yan Meng [9] presented a new image fusion algorithm based on the multiwavelet transform combined with a high-pass filter (HPF). Al-Azzawi et al. [10] presented a method for fusing medical images captured with different modalities, which enhances the original images and combines their complementary information. Their fusion rule is based on principal component analysis (PCA) applied to the frequency components of the DT-CCT coefficients (contourlet domain).
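Most of the wavelet-family schemes surveyed here share a decompose-select-reconstruct pattern. The sketch below shows the scalar-DWT analogue using PyWavelets, averaging the approximation band and keeping the larger-magnitude detail coefficients; it is a simplified stand-in under those assumptions, not any cited author's multiwavelet method.

```python
import numpy as np
import pywt

def dwt_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]        # average the coarse approximation band
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        # for each (horizontal, vertical, diagonal) detail band, keep the
        # coefficient with the larger magnitude, i.e. the stronger edge
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```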

Riazifar et al. [11] proposed a compression scheme in the transform domain and compared the performance of the DWT and the CT. Eslami et al. proposed a new image coding scheme based on the wavelet-based contourlet transform, using a contourlet-based set partitioning in hierarchical trees (SPIHT) algorithm.

Eslami et al. [12] proposed the Hybrid Wavelets and Directional Filter Banks (HWD) transform as an improvement over the wavelet and contourlet transforms. Eslami et al. also proposed a Wavelet-Based Contourlet Transform (WBCT), which is capable of approximating natural images containing contours and oscillatory patterns; its major disadvantage is that the number of directions doubles at every other wavelet level. Martin et al. [13] proposed a new fusion method for multispectral and panchromatic images that uses a highly anisotropic and redundant representation of images; it performs fusion through a single directional low-pass filter bank with low computational cost, even though the CT is a double filter bank structure. Krishnamoorthy et al.

[14] discussed the implementation of three categories of image fusion algorithms: the basic fusion algorithms, the pyramid-based algorithms, and the basic DWT algorithms. They further developed these into an image fusion toolkit, ImFus, using Visual C++ 6.0. Dyla et al. [15] proposed a novel image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and an image decomposition model (IDM). Shirin Mahmoudi [16] proposed a new fusion method that integrates IHS and PCA; it preserves spectral content as well as wavelet-based methods do, while providing better spatial content than the IHS and wavelet-based methods.

Jia et al. [17] proposed a new method for fusing panchromatic (Pan) and multispectral (MS) remote sensing images based on the nonsubsampled contourlet transform (NSCT) and the IHS transform. Hui et al. [18] proposed a novel image fusion strategy for panchromatic high-resolution images and multispectral images in the NSCT domain.

Ghouti et al. proposed an image fusion scheme that incorporates the balanced multiwavelet transform, using multiple wavelet and scaling functions for the first time. Kannan et al.

[19] combined the multiwavelet transform, the stationary wavelet transform, and the wavelet packet transform to form the multi-stationary wavelet packet transform, and evaluated its performance. Artificial neural networks (ANNs) have proved to be a more powerful and self-adaptive method of pattern recognition than traditional linear and simple nonlinear analyses. Li et al. [20] described the application of ANNs to pixel-level fusion of multi-focus images taken from the same scene. Sahoolizadeh et al. [21] proposed a new hybrid approach for face recognition using Gabor wavelets and neural networks. Khosravi et al. [22] proposed a new approach to block-feature-based image fusion using the multiwavelet transform and neural networks; a qualitative analysis on several test images found better results.

In 2004, Jakobson et al. suggested using cognitive principles to bridge the gap between human and computer image understanding. Some of the first attempts to apply concepts derived from neural models of visual processing and pattern recognition to image fusion and interpretation were quite successful. The present paper considers the integration of various image fusion transforms with neural networks, which play a significant role in feature extraction and detection in machine learning applications. Artificial neural networks seem to be one possible approach to handling the high-dimensional nature of hyperspectral satellite sensor data. As the number of image fusion techniques increases, there is a growing need for metrics.

In recent years, a number of computational image fusion quality assessment metrics have therefore been proposed (Zheng et al. [23]; Zhu; Jia [24]). Although some of these metrics agree with human visual perception to some extent, most of them cannot predict observer performance for different input imagery and scenarios.

Kalman Filtering

The Kalman filter is a statistical recursive data processing algorithm that continuously calculates an estimate of a continuous-valued state based on periodic observations of that state. It uses an explicit statistical model of how the state x(t) changes over time and an explicit statistical model of how the observations z(t) relate to that state.

The explicit description of the process and the observations allows many sensor models to be incorporated easily into the algorithm; moreover, the role of each sensor in the system can be assessed continuously. Recent research has applied the extended Kalman filter and the unscented Kalman filter to robot navigation [25].
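As a minimal illustration of the predict-update recursion, here is a scalar Kalman filter for a random-walk state x(t) observed through noisy measurements z(t); the noise variances q and r are illustrative assumptions.

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: x(t) = x(t-1) + w (var q), z(t) = x(t) + v (var r)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                 # predict: process noise inflates the variance
        k = p / (p + r)           # Kalman gain balances prediction vs. measurement
        x = x + k * (z - x)       # update the estimate with the innovation
        p = (1.0 - k) * p         # variance shrinks after incorporating z
        estimates.append(x)
    return np.array(estimates)
```

Because the gain k is recomputed at every step from the two noise models, the contribution of each sensor to the estimate can be read off directly, which is the property highlighted above.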

Support Vector Machine (SVM)

The support vector machine was proposed in 1963, and its current standard formulation appeared in 1993 [24]. It is a learning model that analyses data and extracts patterns for classification and regression analysis. Given a set of training examples, each belonging to one of two classes, an SVM assigns new examples to one of the two categories; it is thus a non-probabilistic binary linear classifier.

The optimized hyperplane should minimize structural errors and maximize the margin between the hyperplane and the closest points [26]. SVMs are also used to compress information in sensor fusion systems with limited bandwidth, where large sets of data samples are not feasible for real-time processing [27]. A two-layer SVM scheme has also been proposed and significantly improves on the results of a single SVM [28].
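A minimal two-class sketch using scikit-learn's SVC; the synthetic Gaussian clusters below merely stand in for fused sensor features and are not from any cited scheme.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),    # class 0: samples around (0, 0)
               rng.normal(3, 1, (50, 2))])   # class 1: samples around (3, 3)
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0)            # maximum-margin separating hyperplane
clf.fit(X, y)
print(clf.predict([[0.5, 0.5], [3.2, 2.8]])) # assign new examples to a class
```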

Bayesian Inference Technique

Bayes' rule provides a means of combining observed data with past beliefs about the state of the environment. It requires that the state of an object or environment, described as x, and an observation z be characterized by a joint probability P(x, z), or a joint probability distribution for continuous variables. The previous posterior acts as the current prior, yielding the new posterior density, so the computation is much less demanding [29].
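A minimal sketch of this recursive update over a discrete state space; the three states and the sensor likelihoods are invented purely for illustration.

```python
import numpy as np

def bayes_update(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One step of recursive Bayesian fusion: posterior(x) ∝ P(z|x) * prior(x)."""
    posterior = likelihood * prior
    return posterior / posterior.sum()       # normalize to a proper distribution

belief = np.array([1/3, 1/3, 1/3])                         # uninformative prior
belief = bayes_update(belief, np.array([0.7, 0.2, 0.1]))   # fuse sensor 1: P(z1|x)
belief = bayes_update(belief, np.array([0.6, 0.3, 0.1]))   # fuse sensor 2: P(z2|x)
print(belief)   # posterior sharpened toward the first state
```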

Sequential Monte Carlo Methods (Particle Filters)

Particle filters are a class of modern sequential Monte Carlo methods [30]. They build a posterior density function from a number of random samples called particles. The advantage of particle filtering is its ability to represent arbitrary probability densities when systems are non-Gaussian or nonlinear. Moreover, the calculation error is usually unknown or non-Gaussian, so a full probability density function is mandatory. A particle filter works by approximating the probability of the state as a weighted sum of random samples, which are predicted forward, with their weights updated from the likelihood of the measurement. This is called sequential importance sampling (SIS).

A resampling step is introduced in newer iterations to prevent filter divergence: particles with the lowest weights are removed, and new particles are created at the points with the highest weights [31]. This is called sequential importance resampling. Particle filters have proven effective in distributed sensing environments [32]. A number of different types of particle filter exist, and their performance varies across applications; the choice of importance density is the main factor that determines performance [33].
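A one-step sketch of SIS with a resampling guard for a one-dimensional state; the random-walk motion model, Gaussian likelihood, and effective-sample-size threshold are illustrative choices, not prescribed by the cited works.

```python
import numpy as np

def particle_filter_step(particles, weights, z, motion_std=0.1, meas_std=0.5):
    """One SIS step plus resampling for a 1-D state observed by measurement z."""
    rng = np.random.default_rng()
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the Gaussian likelihood of the measurement.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses (guards against divergence
    # by dropping low-weight particles and duplicating high-weight ones).
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```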

Dempster-Shafer Theory of Evidence

The Dempster-Shafer (D-S) evidence theory was proposed by Dempster and later extended mathematically by Shafer. D-S theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and using Dempster's rule to combine the degrees of belief when they are based on independent items of evidence. Studies have compared the Bayesian inference method and the Dempster-Shafer method; a recent application presenting human-autonomy sensor fusion in object detection compares the performance of Bayesian, Dempster-Shafer, and dynamic Dempster-Shafer fusion methods. There are certainly issues with D-S theory, such as the complexity of its computations and counterintuitive results from conflicting data. A common approach is to use D-S theory together with other algorithms to enhance accuracy and speed [35].
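Dempster's rule of combination itself is compact. The sketch below combines two mass functions over a small frame of discernment; the sensors and hypotheses are invented for illustration, and the conflict term shows where the counterintuitive behaviour noted above can enter.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2     # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Renormalize: the conflicting mass is discarded, which is the usual
    # source of counterintuitive results under heavy conflict.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m_a = {frozenset({"tank"}): 0.6, frozenset({"tank", "truck"}): 0.4}
m_b = {frozenset({"truck"}): 0.3, frozenset({"tank", "truck"}): 0.7}
print(dempster_combine(m_a, m_b))
```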

Artificial Neural Networks (ANN)

ANNs are mathematical models composed of nonlinear computational elements (neurons) operating in parallel and connected in a graph topology characterized by differently weighted links. ANNs have proven to be a more powerful and more adaptable method than traditional linear or nonlinear analyses [36]. The layers of processing neurons can be connected in different ways, and the neurons can be trained to learn the behaviour of any system, using sets of training data and learning algorithms to tune the individual weights of the links.

Weights are altered to improve the robustness of the system. Once the errors on the training data have been minimized, the ANN retains the learned function and can be used for further estimation; the data are thus closely linked with the processing. One major problem at present is determining the best topology for any given problem.

The factors that determine this include the problem itself, the prospective approach to the problem, and the characteristics of the neural network. Recent research in robot navigation has successfully used neural networks in sensor fusion [37].

Fuzzy Logic

Fuzzy logic is finding widespread popularity as a method of representing uncertainty in high-level fusion. Essentially, it is a type of multi-valued logic that allows the uncertainty in multi-sensor fusion to be captured in the inference process by assigning each proposition a degree of membership between 0 and 1. Fuzzy sensor fusion approaches have shown a high degree of certainty and accuracy, although the trade-off is the complex computation required [37].
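As a toy illustration of degree-of-membership fusion, the sketch below grades two hypothetical distance sensors against a triangular "near" fuzzy set and combines them with the minimum operator (a fuzzy AND); the set boundaries and readings are invented.

```python
def triangular_membership(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership in a triangular fuzzy set rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two sensors grade the same distance against the "near" set (0 m to 3 m, peak 1 m);
# a conservative fusion takes the minimum of the membership degrees.
mu_lidar = triangular_membership(1.2, 0.0, 1.0, 3.0)    # = 0.9
mu_sonar = triangular_membership(1.8, 0.0, 1.0, 3.0)    # = 0.6
print(min(mu_lidar, mu_sonar))                          # fused degree of "near"
```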

In most applications of sensor fusion, a combination of methods is used to exploit the advantages of both artificial intelligence methods and traditional methods. One approach merges a neural network with a linearly constrained least squares method and is shown to be stable and fast. Another is able to take different information sources with different noise characteristics and achieve optimized results through the use of fuzzy logic [38]. A third integrates the Kalman filter with fuzzy logic techniques, achieving the optimality of the Kalman filter together with the competence of fuzzy systems at handling inconsistent information.

Table I: Comparison of Existing Image Fusion Techniques

1. Simple average [31] (spatial domain). Advantage: the simplest image fusion method. Disadvantage: pixel-level methods do not guarantee clear objects from the set of input images.
2. Simple maximum [31] (spatial domain). Advantage: produces a more highly focused output image than the average method. Disadvantage: pixel-level methods suffer from blurring, which directly affects the contrast of the image.
3. Simple minimum [31] (spatial domain). Advantage: useful for darker images, where darkness relates to noise. Disadvantage: it simply selects the pixels with minimum intensity and discards all others.
4. PCA [31] (spatial domain). Advantage: PCA transforms a set of correlated variables into uncorrelated variables, a property that can be exploited for image fusion. Disadvantage: spatial domain fusion may produce spectral degradation.
5. DWT [32] (transform domain). Advantage: may outperform standard fusion methods in minimizing spectral distortion, and provides a better signal-to-noise ratio than pixel-based approaches. Disadvantage: the final fused image has lower spatial resolution.
6. IHS (intensity, hue, saturation) [31] (transform domain). Advantage: RGB converts easily to IHS for image sharpening; resolution increases, and image intensity is improved and enhanced. Disadvantage: cannot decompose the image into frequencies, and colour distortion is often significant.
7. BT (Brovey transform) [33] (transform domain). Advantage: good for producing RGB images with a higher degree of contrast. Disadvantage: high-contrast pixel values in the input images are depressed in the fused image.
8. ICA (independent component analysis) [34] (transform domain). Advantage: more degrees of freedom. Disadvantage: not shift invariant.
9. Combined DWT and PCA [11] (transform domain). Advantage: multi-level fusion, in which the image undergoes fusion twice using an efficient fusion technique, gives improved results; the output image contains both high spatial resolution and high-quality spectral content. Disadvantage: the fusion algorithm is complex, and a good fusion technique is required for a better result.
10. Combination of pixel and energy fusion rules [15] (transform domain). Advantage: preserves boundary information and structural details without introducing other inconsistencies into the image. Disadvantage: the complexity of the method increases.

PROBLEM IDENTIFICATIONS

After a careful, critical, and in-depth literature survey of existing image fusion methods, the present study found the following major disadvantages:
- The spatial domain approaches produce spatial distortions in the fused image.

- A few standard methods can preserve more spatial resolution but introduce more serious spectral degradation.
- Some parts of the imaging chain, such as the fusion algorithm itself, may introduce distortion or artefacts into the signal, which leads to the problem of quality assessment.

- A few wavelet-transform-based fusion schemes cannot preserve the salient features of the source images efficiently and will probably introduce artefacts and inconsistencies into the fused results.
- Only a few transforms can produce a fused image with both high spatial and high spectral quality, without colour or texture distortion, blocking artefacts, or noise amplification.
- A few of the oldest transforms preserve satisfactory spatial information, but strong spectral distortions aggravate the final fusion result.
- The classical fusion methods cause characteristic degradation, spectral loss, or colour distortion.
- Isotropic wavelets lack shift invariance and multi-directionality and fail to provide an optimal representation of highly anisotropic edges and contours in images.

Table II: Single-scale (spatial domain) fusion methods, their advantages and drawbacks

- Average, minimum, maximum, and morphological operators (Ardeshir and Nikolov, 2007). Advantages: easy to implement. Drawbacks: reduce contrast or produce brightness or colour distortions.
- Principal component analysis (PCA) (Yonghong, 1998), independent component analysis (ICA) (Mitianoudis and Stathaki, 2007), intensity-hue-saturation (IHS) (Tu et al., 2001). Advantages: computationally efficient. Drawbacks: may suffer from spectral distortion; may give desirable results for only a few fusion datasets.
- Focus measure (Huang and Jing, 2007), bilateral sharpness criteria (Tian et al., 2011). Advantages: may produce desirable results. Drawbacks: applicable to a few datasets; computationally expensive.
- Optimization methods (Shen et al., 2011; Xu and Varshney, 2011). Advantages: may produce desirable results. Drawbacks: take multiple iterations; computationally expensive; over-smooth the fused image.

Table III: Discussion of existing approaches suggested by various authors

1. Pandit et al. (2015) [31]: evaluation of image fusion algorithms in remote sensing. Benefits: acquires more accuracy; the elimination of redundancy provides reliable image data. Gaps: colour distortion is still a significant problem.
2. Ejaily et al. (2013) [34]: improved ICA. Benefits: the improved ICA fusion method provides good-quality images with specific contrast. Gaps: the traditional ICA methods provide images of lower quality, and the fused images are further transformed to IHS colour space, which is somewhat time-consuming.
3. Jin et al. (2011) [31]: improved IHS transform using the panchromatic image degradation model. Benefits: feasible and more efficient; helps to reduce spectral distortion; the resulting image contains more information and is clearer.
4. Prakash et al. (2013) [35]: biorthogonal wavelet transform (BWT) using the maximum rule. Benefits: the wavelet-transform-based method improves fused images by reducing the loss of valuable information and the distortions produced by spatial domain techniques; its linear phase and symmetry properties help retain image information such as lines, edges, boundaries, and curves. Gaps: the Brovey transform is limited to three bands, and the multiplicative methods introduce distortions.
5. Liu et al. (2013) [9]: multifocus image fusion using the lifting scheme of wavelets. Benefits: fastest computational speed, lower memory use, easier to implement. Gaps: focuses only on the high-frequency details of images.
6. Sharmila et al. (2013) [37]: discrete wavelet transform (DWT) with entropy concepts. Benefits: fused images are noise free and contain better-quality information; incorporating multiple modalities helps derive very useful information from medical images such as MRI and CT, which a single modality cannot provide.
7. Kaur et al. (2015) [38]: PCA. Benefits: a large amount of input information can be compressed into a small amount without any loss of information.

It also removes redundancy, and combining DWT with PCA improves performance. Gaps: it may not be satisfactory for fusing high- and low-resolution multispectral (MS) images.

IMAGE FUSION APPLICATION

Image fusion systems have already been applied to different problems, but there are areas in which research is still being carried out and developed. Overlap may occur among the following cases, but this is a general attempt to cover the broad aspects.

Internet of Things

In the last decade, the Internet of Things (IoT) has attracted attention from academia [39] and industry, due to its potential to create a smart world in which every object is connected to the Internet and communicates with other objects with minimal human intervention [40]. The IoT requires large amounts of real-time data to provide material for analysis and action, and sensors are available everywhere, from smart devices (smartphones, tablets) and wearables (smartwatches, camera glasses) to healthcare (RFIDs). Approaches that improve sensor fusion allow the IoT to work efficiently [41].

Sensor fusion helps to enable context awareness, a cornerstone of the IoT. By knowing the circumstances or facts that form an event, we can use this information to understand why a situation is as it is and to form suitable actions. With about 50 to 100 billion devices projected to connect to the Internet by 2020 [42], each able to generate data constantly, an enormous amount of data must be handled. Some areas expected to see applications are building automation, such as smart energy consumption control, the power grid, the environment, industry [43], and consumer home automation [44].

With cars being equipped with sensors, as well as camera feeds on the road, information can be generated to track traffic anywhere in a city and fed back to users [45].

Automotive and Navigation

As cars become more sophisticated, developments focus on improving performance, safety, comfort, environmental friendliness, and driver assistance. In autonomous driving, various sensors are featured, such as GPS (Global Positioning System), LiDAR, and ultrasound. Many of these are used to create object representations of both the car and its surroundings [46], and these data can provide a complete view of the driving conditions. With so many types of sensors relied upon, a multi-level fusion process is required, in which low-level sensor fusion processes the massive amount of input data and high-level fusion provides the real-time decisions. With recent developments in RADAR and LiDAR, there is now less demand for on-board cameras, as these two sensors produce richer and more accurate 3D representations that help detect and classify objects better [47]. However, while LiDAR provides a better field of coverage, it does not provide speed information, and RADAR gives accurate speed data but is not effective in curved lanes. Many of these problems are related to mobile robotics, with path planning and obstacle avoidance being scaled up to real-world applications.

The design of complementary sensors is essential for providing better 3D maps [49] and for allowing the system to recognize different bodies in the environment; the scalability of the electronic systems is equally essential to ensure that no bottlenecks occur during real-time processing [50], given the expected increase in information feeds. For safety and collision avoidance, sensor fusion research is being done to improve the quality of detection, especially in preventing false positives [51].

Quadrotors and Drones

Drones and quadrotors are also an emerging field for developing new technologies and methods to ensure safe operation as well as reliable manoeuvring [52].

The navigation system of such a quadrotor usually consists of a three-axis gyroscope, a three-axis accelerometer, and a magnetometer, with a complementary sensor group comprising a pressure altimeter, ultrasonic sensors, and GPS [53]. Autonomous flight is one aspect in which new progress has allowed quadrotors to work independently in places that humans may be unable to reach. The relatively low cost of implementing fusion, while still maintaining satisfactory performance, has enabled the wide availability of consumer-level drones. Moreover, owing to the robustness of sensor fusion, drones can hover in a fixed position without GPS by using other sensors [56]. It is important to show the reliability of the fusion method, so that operation is not compromised even when one sensor input is missing.

Computer Vision

Computer vision started off as an attempt to mimic human vision, though using competing sensors [58]. As understanding of the complexity of perception developed, new sensors, such as 3D cameras, have helped to augment the abilities of computer vision.

It has become an essential part of many applications, for example medical imaging, the vision of intelligent robots [59], and non-destructive testing. In recent years, the need to improve the security of the general public, as well as of public assets, has grown. One big hurdle is the detection of concealed weapons underneath a person's clothing. Several fusion methods have been worked on, for example combining multiple images with different exposures together with infrared images, and combined detection for automatic bag screening at venues such as stadiums and museums [60].

Virtual Reality / Augmented Reality

A recent development, virtual reality (VR), is an emerging technology attracting attention from consumers as an entertainment and educational tool. Some currently available models are the HTC Vive and the Oculus Rift. One of the key challenges of a virtual environment is tracking head movement: as users change their viewpoint, the virtual elements must keep their alignment with the observed 3D position and orientation of real-world objects. In addition to accuracy, the ability to provide stable motion is vital. A final challenge is reducing latency, defined here as the time between head movement and the corresponding image reaching the user's retina; this was an early problem that caused VR simulator sickness. A single gyroscope does not give information about the user's location, while accelerometer readings tend to be noisy and cannot provide yaw.

A magnetometer can act like a compass, allowing an orientation estimate; however, it is easily affected by any ferromagnetic metal [61]. The current sensor fusion method uses a weighted filter to determine what information to take from each sensor, relying on the long-term accuracy of the accelerometer while using the gyroscope to reduce signal noise in the short term [62]. Predictive tracking methods based on the angular speed of the gyroscope have also been implemented, reducing latency to around 30 ms.
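The weighted-filter idea can be sketched as a complementary filter: integrate the gyroscope for short-term smoothness and lean on the accelerometer-derived angle for long-term stability. The blend factor alpha and the update below are illustrative assumptions, not the specific filter of [62].

```python
def complementary_filter(angle: float, gyro_rate: float, accel_angle: float,
                         dt: float, alpha: float = 0.98) -> float:
    """Blend gyro integration (smooth but drifting) with the accelerometer
    angle (noisy but drift-free): high-pass the gyro, low-pass the accel."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# One update at 100 Hz: current pitch 10.0 deg, gyro reads 5 deg/s,
# accelerometer-derived pitch reads 10.4 deg.
pitch = complementary_filter(10.0, 5.0, 10.4, dt=0.01)
print(pitch)   # ~10.06 deg: mostly the integrated gyro, nudged by the accelerometer
```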

Healthcare

With an ageing population becoming a common trend in developed countries, it is important to have ways to monitor or track health conditions without people monitoring around the clock [63]. Fall detection is one area being researched, and it is especially helpful for elderly people who live alone without supervision. Sensor fusion methods benefit not only the elderly: it is also possible to monitor the development of infant motor functions, with new abilities to assess body postures in infants. General research using body sensors and wireless sensor networks is common, covering the tracking and identification of humans, tracking the mental state of patients, and attempts to classify an individual's state of mind by fusing data from various physiological sensors, for example heart rate, respiration rate, and carbon dioxide and oxygen levels. Some of the main problems are on the software side, such as the reliability of measurements and the network status needed to prevent false positives from the healthcare unit [64].

Micro-scale Sensor Fusion

Wearable electronics are a big opportunity for sensors, which can track a user's activity, healthcare, and sports applications. Micro-electromechanical systems (MEMS) are the technology that enables them; they are now everywhere, from tablets to smartwatches to smartphones. Besides MEMS, System on Chip (SoC) solutions are becoming more common with the need to incorporate multiple sensors on a single hardware platform. To achieve this, the miniaturization of sensors is an active research area, as the number of sensors in a system will keep increasing.

CONCLUSION

This paper has provided a review of image fusion, from the models of different algorithms and the applications of image fusion to some of the common algorithms used to enable it, as well as recent research being carried out. New application areas such as the Internet of Things, automotive systems, and healthcare show benefits when image fusion is applied, and there is still a wide range of potential applications that cannot be covered fully here.

Certainly, there are still areas of development and research that can help to advance the current level of knowledge. Algorithm fusion is still being debated, with attempts to build on the advantages of each method and to use new methods to cover the weaknesses of others. New approaches combining the different levels of image fusion and the different techniques have to be developed, and a general framework for assessing image fusion techniques will be essential to benchmark them clearly and to determine precisely the constraints required for a given system. Accuracy, computational speed, and cost are the three basic requirements of image fusion, but in most cases today only two of them are fulfilled by any one method.

REFERENCES

[1] Yan Luo, Rong Liu, and Yu Feng Zhu, "Fusion of remote sensing image based on the PCA+atrous wavelet transform," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVII, part B7, Beijing, 2008.
[2] T. N. T. Goodman and S. L. Lee, "Wavelets of multiplicity r," Transactions of the American Mathematical Society, vol. 342, pp. 307-324, 1994.
[3] C. K. Chui and L. Jian-ao, "A study of orthonormal multiwavelets," Applied Numerical Mathematics, vol. 20, no. 3, pp. 273-298, March 1996.
[4] Hai-hui Wang, "A new multiwavelet-based approach to image fusion," Journal of Mathematical Imaging and Vision, vol. 21, no. 2, pp. 177-192, September 2004.
[5] G. Donovan, J. S. Geronimo, D. P. Hardin, and P. R. Massopust, "Construction of orthogonal wavelets using fractal interpolation functions," preprint, 1994.
[6] L. Yang, B. L. Guo, and W. Li, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1-3, pp. 203-211.
[7] V. Strela, "Multiwavelets: Theory and Applications," Ph.D. thesis, MIT, 1996.
[8] Yan Na, Manfred Ehlers, and Wanhai Yang, "Adaptive remote sensing image fusion with multiwavelet transform," Proc. SPIE 5983, Remote Sensing for Environmental Monitoring, GIS Applications, and Geology V, 598302, October 28, 2005.
[9] Yan Meng, "Remote sensing image fusion using multi-wavelet transform combined with HPF," conference publication, pp. 1651-1654, 2-5 July 2007.
[10] N. Al-Azzawi, H. A. Sakim, A. K. Abdullah, and H. Ibrahim, "Medical image fusion scheme using complex contourlet transform based on PCA," Conf. Proc. IEEE Eng. Med. Biol. Soc., pp. 5813-5816, 2009.
[11] Negar Riazifar and Mehran Yazdi, "Effectiveness of contourlet vs wavelet transform on medical image compression: a comparative study," World Academy of Science, Engineering and Technology, vol. 49, 2009.
[12] Ramin Eslami and Hayder Radha, "New image transforms using hybrid wavelets and directional filter banks: analysis and design."
[13] Ramin Eslami and Hayder Radha, "Wavelet-based contourlet coding using an SPIHT-like algorithm," IEEE International Conference on Image Processing, October 2004.
[14] C. E. Gonzalo Martin and M. Lillo Saavedra, "An efficient algorithm for satellite images fusion based on contourlet transform," Archivo Digital UPM, 2008.
[15] Shivsubramani Krishnamoorthy and K. P. Soman, "Implementation and comparative study of image fusion algorithms," International Journal of Computer Applications, vol. 9, no. 2, November 2010.
[16] M. H. Ould Mohamed Dyla and H. Tairi, "Multi focus image fusion scheme using a combination of nonsubsampled contourlet transform and an image decomposition model," Journal of Theoretical and Applied Information Technology, vol. 38, no. 2, 30 April 2012.
[17] Shirin Mahmoudi, "Contourlet-based image fusion using information measures," Proceedings of the 2nd WSEAS International Symposium on Wavelets Theory & Applications in Applied Mathematics, Signal Processing & Modern Science (WAV '08), Istanbul, Turkey, May 27-30, 2008.
[18] Y. Jia and M. Xiao, "Fusion of pan and multispectral images based on contourlet transform," ISPRS TC VII Symposium, 100 Years ISPRS, Vienna, Austria, IAPRS, vol. XXXVIII, part 7B, July 5-7, 2010.
[19] Yang Xiao-Hui and Jiao Li-Cheng, "Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform," Acta Automatica Sinica, vol. 34, no. 3, March 2008.
[20] K. Kannan, S. Arumuga Perumal, and K. Arulmozhi, "Area level fusion of multi-focused images using multi-stationary wavelet packet transform," International Journal of Computer Applications, vol. 2, no. 1, May 2010.
[21] L. Shutao, J. T. Kwok, and W. Yaonan, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, pp. 985-997, 2002.
[22] Hossein Sahoolizadeh, Davood Sarikhanimoghadam, and Hamid Dehghani, "Face detection using Gabor wavelets and neural networks," World Academy of Science, Engineering and Technology, vol. 45, 2008.
[23] Maziyar Khosravi and Mazaheri Amin, "Block feature based image fusion using multi wavelet transforms," International Journal of Engineering Science and Technology, vol. 3, no. 8, August 2011.
[24] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multi-sensor data fusion: algorithms and applications," Sensors (Basel), vol. 9, no. 10, pp. 7771-7784, 2009.
[25] D. Jhu, X. Jio, N. Clinton, and N. Wang, "An artificial neural network model for estimating crop yields using remotely sensed information," International Journal of Remote Sensing, vol. 25, no. 9, pp. 1723-1732, May 2004.
[26] H. Guanshan, "Neural network applications in sensor fusion for a mobile robot motion," WASE International Conference on Information Engineering (ICIE), vol. 1, pp. 46-49, August 2010.
[27] R. E. Gibson, D. L. Hall, and J. A. Stover, "An autonomous fuzzy logic architecture for multisensor data fusion," International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 143-150, 1994.
[28] M. A. A. Akhoundi and E. Valavi, "Multi-sensor fuzzy data fusion using sensors with different characteristics," arXiv preprint arXiv:1010.6096, 2010.
[29] Y. Xia, H. Leung, and E. Bosse, "Neural data fusion algorithms based on a linearly constrained least square method," IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 320-329, 2002.
[30] K. Goebel and W. Yan, "Hybrid data fusion for correction of sensor drift faults," IMACS Multiconference on Computational Engineering in Systems Applications, vol. 1, pp. 456-462, October 2006.
[31] P. J. Escamilla-Ambrosio and N. Mort, "Hybrid Kalman filter-fuzzy logic adaptive multisensor data fusion architectures," Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 5215-5220, December 2003.
[32] Vaibhav R. Pandit and R. J. Bhiwani, "Image fusion in remote sensing applications: a review," International Journal of Computer Applications, vol. 120, no. 10, June 2015.
[33] S. S. Malik, S. P. P. Kumar, and G. B. Maruthi, "DT-CWT: feature level image fusion based on dual-tree complex wavelet transform," International Conference on Information Communication and Embedded Systems (ICICES), pp. 1-7, February 2014.
[34] R. Ashok Mandhare and P. Upadhyay, "Pixel-level image fusion using Brovey transform and wavelet transform," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 2, no. 6, pp. 2690-2695, June 2013.
[35] A. E. Ejaily, M. Y. E. Nahas, and G. Ismail, "A new image fusion technique to improve the quality of remote sensing images," International Journal of Computer Science Issues (IJCSI), vol. 10, issue 1, pp. 565-569, January 2013.
[36] Om Prakash, R. Srivastava, and Ashish Khare, "Biorthogonal wavelet transform based image fusion using absolute maximum fusion rule," Proceedings of the 2013 IEEE Conference on Information and Communication Technologies (ICT 2013), pp. 577-582, 2013.
[37] Lixin Liu, H. Bian, and G. Shao, "An effective wavelet based scheme for multi-focus image fusion," International Conference on Mechatronics and Automation, Japan: IEEE, pp. 1720-1725, 2013.
[38] K. Sharmila, S. Rajkumar, and V. Vijayarajan, "Hybrid method for multimodality medical image fusion using discrete wavelet transform and entropy concepts with quantitative analysis," International Conference on Communication and Signal Processing, India: IEEE, pp. 489-493, April 2013.
[39] V. Kaur and J. Kaur, "Comparison of image fusion techniques: spatial and transform domain based techniques," International Journal of Engineering and Computer Science, ISSN 2319-7242, pp. 12109-12112, May 2015.
[40] K. S. Yeo, M. C. Chian, T. C. W. Ng, and D. A. Tuan, "Internet of things: trends, challenges and applications," 2014 14th International Symposium on Integrated Circuits (ISIC), pp. 568-571, 2014.
[41] H. Sundmaeker, P. Guillemin, P. Friess, and S. Woelffle, "Vision and challenges for realising the internet of things," European Commission Information Society and Media, 2010.
[42] F. H. Bijarbooneh, W. Du, E. C. H. Ngai, X. M. Fu, and J. C. Liu, "Cloud-assisted data fusion and sensor selection for internet of things," IEEE Internet of Things Journal, vol. 3, no. 3, pp. 257-268, June 2016.
[43] Zaslavsky, C. Perera, and D. Georgakopoulos, "Sensing as a service and big data," arXiv preprint arXiv:1301.0159, 2013.
[44] F. Kirsch, R. Miesen, and M. Vossiek, "Precise local-positioning for autonomous situation awareness in the internet of things," 2014 IEEE MTT-S International Microwave Symposium (IMS), pp. 1-4, 2014.
[45] C.-L. Wu, Y. Xie, S. K. Pradhan, L.-C. Fu, and Y.-C. Zeng, "Unsupervised context discovery based on hierarchical fusion of heterogeneous features in real smart living environments," 2016 IEEE International Conference on Automation Science and Engineering (CASE), pp. 1106-1111, 2016.
[46] S. Wildstrom, "Better living through big data," 2012. Available: http://newsroom.cisco.com/feature/778800/Better
[47] P. Bonnifait, P. Bouron, P. Crubille, and D. Meizel, "Data fusion of four ABS sensors and GPS for an enhanced localization of car-like vehicles," Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), pp. 1597-1602, 2001.
[48] F. Mujica, "Scalable electronics driving autonomous vehicle technologies," Texas Instruments, 2014.
[49] M. Renato, E. Fernandez-Moral, and P. Rives, "Dense accurate urban mapping from spherical RGB-D images," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6259-6264, 2015.
[50] E. Cardarelli, L. Sabattini, C. Secchi, and C. Fantuzzi, "Cloud robotics paradigm for enhanced navigation of autonomous vehicles in real world industrial applications," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4518-4523, 2015.
[51] Westenberger, M. Muntzinger, M. Gabb, M. Fritzsche, and K. Dietmayer, "Time-to-collision estimation in automotive multisensory fusion with delayed measurements," Advanced Microsystems for Automotive Applications, pp. 13-20, 2013.
[52] S. Roelofsen, D. Gillet, and A. Martinoli, "Reciprocal collision avoidance for quadrotors using on-board visual detection," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4810-4817, 2015.
[53] X. J. Wei, "Autonomous control system for the quadrotor unmanned aerial vehicle," 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pp. 796-799, 2016.
[54] M. Tailanian, S. Paternain, R. Rosa, and R. Canetti, "Design and implementation of sensor data fusion for an autonomous quadrotor," 2014 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) Proceedings, pp. 1431-1436, 2014.
[55] W. Zheng, J. Wang, and Z. F. Wang, "Multi-sensor fusion based real-time hovering for a quadrotor without GPS in assigned position," Proceedings of the 28th Chinese Control and Decision Conference (2016 CCDC), pp. 3605-3610, 2016.
[56] Eitel, J. T. Springenberg, L. Spinello, M. Riedmiller, and W. Burgard, "Multimodal deep learning for robust RGB-D object recognition," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 681-687, 2015.
[57] E. M. Upadhyay and N. K. Rana, "Exposure fusion for concealed weapon detection," 2014 2nd International Conference on Devices, Circuits and Systems (ICDCS), pp. 1-6, March 2014.
[58] M. Sagi-Dolev, "Multi-threat detection system," U.S. Patent 8171810, 2012.
[59] D. Gebre-Egziabher, G. H. Elkaim, J. D. Powel, and B. W. Parkinson, "Calibration of strapdown magnetometers in magnetic field domain," Journal of Aerospace Engineering, vol. 19, no. 2, pp. 87-102, April 2006.
[60] Favre, B. M. Jolles, O. Siegrist, and K. Aminian, "Quaternion-based fusion of gyroscopes and accelerometers to improve 3D angle measurement," Electronics Letters, vol. 42, no. 11, pp. 612-614, May 2006.
[61] H. Medjahed, D. Istrate, J. Boudy, J. L. Baldinger, and B. Dorizzi, "A pervasive multi-sensor data fusion for smart home healthcare monitoring," IEEE International Conference on Fuzzy Systems (FUZZ 2011), pp. 1466-1473, June 2011.
[62] Rihar, M. Mihelj, J. Pašić, J. Kolar, and M. Munih, "Using sensory data fusion methods for infant body posture assessment," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 292-297, 2015.
[63] S. Knoop, S. Vacek, and R. Dillmann, "Sensor fusion for 3D human body tracking with an articulated 3D body model," Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), pp. 1686-1691, 2006.
[64] M. T. Yang and S. Y. Huang, "Appearance-based multimodal human tracking and identification for healthcare in the digital home," Sensors (Basel), vol. 14, no. 8, pp. 14253-14277, August 2014.
[65] S. Begum, S. Barua, and M. U. Ahmed, "Physiological sensor signals classification for healthcare using sensor data fusion and case-based reasoning," Sensors (Basel), vol. 14, no. 7, pp. 11770-11785, July 2014.
[66] H. Lee, K. Park, B. Lee, J. Choi, and R. Elmasri, "Issues in data fusion for healthcare monitoring," Proceedings of the 1st International Conference on Pervasive Technologies Related to Assistive Environments, 2008.
