1.1 Introduction

1.1.1 Digital Image Processing

A digital image is an array of real or complex numbers represented by a finite number of bits. The term digital image processing generally refers to the processing of a two-dimensional picture by a digital computer.
Fig 1.1: Typical Digital Image Processing

An image, given in the form of a slide, photograph, or chart, is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution TV monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 30 frames/sec to produce a visibly continuous display. Mini- or microcomputers are used to communicate with and control all of the digitization, storage, processing, and display operations.
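The "matrix of binary digits" view can be made concrete with a small sketch (illustrative Python/NumPy, not part of the project's MATLAB code; the pixel values are invented):

```python
import numpy as np

# A digital image is a matrix of intensity values held in a finite number
# of bits -- here a tiny 8-bit grayscale image (values 0..255).
image = np.array([[  0,  64, 128],
                  [191, 255,  32]], dtype=np.uint8)

rows, cols = image.shape
bits_per_pixel = image.itemsize * 8           # uint8 -> 8 bits per pixel
total_bits = rows * cols * bits_per_pixel     # bits to store the whole matrix

print(total_bits)                             # 6 pixels x 8 bits = 48
print(np.binary_repr(image[0, 1], width=8))   # the pixel value 64 as binary digits
```

The same matrix could be stored with fewer bits per pixel at the cost of fewer representable gray levels.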
Figure 1.2 shows the steps in a typical image processing sequence.

Fig 1.2: Typical Image Processing Sequence

Object: An object is a material thing that can be seen and touched.
In this project, the object is a compound leaf.

Imaging system: The device used to capture images of the object.

Digital storage disk: The hardware device used to store the captured image of the object.

Digital computer: Here, the digital computer means the software application installed on the computer.
This project plays the role of the digital computer.

Display: The monitor on which the output is viewed.

Initially, the object's image is captured by the imaging system, a camera (a SONY W110 CYBERSHOT in this project); the captured image is stored on the digital storage device (hard disk or memory card). The images are then processed by the digital computer (this project) according to the instructions given in the project. Finally, the output of the project is shown on the display device.

1.1.3 Phases of Image Processing

The different phases of image processing are:
1. Image representation and modeling
2. Image enhancement
3. Image restoration
4. Image analysis
5. Image reconstruction
6. Image data compression

Image representation and modeling: Images are represented as a collection of overlapping patches Pi (with associated features: visual words, mean color, etc.). Patches are generated by a number of objects (whose spatial extent is represented by blobs) and a background. In each image, the number of blobs and their positions are not known. Blobs are associated with labels. Given the blobs and their parameters, the patches in an image are assumed to be independent.

Image enhancement: The aim of image enhancement is to improve the interpretability of information in images for human viewers, or to provide 'better' input for other automated image processing techniques.
Image enhancement techniques can be divided into two broad categories:
1. Spatial domain methods, which operate directly on pixels, and
2. Frequency domain methods, which operate on the Fourier transform of an image.

Image restoration: The recovery of an original signal from degraded observations.
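The two broad categories of image enhancement can be sketched as follows (an illustrative Python/NumPy example with invented pixel data; the project itself works in MATLAB):

```python
import numpy as np

# A small random grayscale image standing in for a real photograph.
rng = np.random.default_rng(0)
img = rng.integers(50, 200, size=(8, 8)).astype(float)

# 1. Spatial domain: contrast stretching operates directly on pixel values,
#    mapping [min, max] linearly onto the full [0, 255] display range.
stretched = (img - img.min()) / (img.max() - img.min()) * 255.0

# 2. Frequency domain: operate on the Fourier transform of the image, e.g.
#    a crude low-pass filter that keeps only the lowest spatial frequencies.
F = np.fft.fftshift(np.fft.fft2(img))
mask = np.zeros_like(F)
c = img.shape[0] // 2
mask[c - 2:c + 3, c - 2:c + 3] = 1          # keep a 5x5 block around DC
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

print(stretched.min(), stretched.max())      # 0.0 255.0
```

Both operations return an image of the same size; only the domain in which the manipulation happens differs.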
Image restoration is different from image enhancement in that the latter is designed to emphasize features that make the image more pleasing to the observer, but not necessarily to produce realistic data from a scientific point of view. Image enhancement techniques (like contrast stretching or de-blurring by a nearest-neighbor procedure) provided by imaging packages use no a priori model of the process that created the image.

Image analysis: The extraction of meaningful information from images, mainly from digital images, by means of digital image processing techniques.
Image analysis tasks can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face.

Image reconstruction: Digital image reconstruction is a robust means by which the underlying images hidden in blurry and noisy data can be revealed. The main challenge is sensitivity to measurement noise in the input data, which can be strongly magnified, resulting in large artifacts in the reconstructed image. The cure is to restrict the permitted images.

Image data compression: The goal of image data compression is to represent an image as accurately as possible using the fewest number of bits.

This project uses the phases image representation, image enhancement, image analysis, and image reconstruction to achieve its goal.

1.1.4 What Is a Leaf?

In botany, a leaf is an above-ground plant organ specialized for the process of photosynthesis. Leaves are typically flat (laminar) and thin, which evolved as a means to maximize the surface area directly exposed to light.
Likewise, the internal organization of leaves has evolved to maximize exposure of the photosynthetic organelles, the chloroplasts, to light and to increase the absorption of carbon dioxide.

Types of Leaves

The leaf blade has two types of configuration. It may be in one unit, in which case the leaf is called a simple leaf, or it may be divided into numerous small parts that look like individual leaves and together form a compound leaf.

Compound Leaf

A compound leaf is a leaf composed of a number of leaflets on a common stalk, arranged either palmately, like the fingers of a hand, or pinnately, like the leaflets of a fern; the leaflets themselves may be compound.

Types of Compound Leaves

Fig 1.3: Types of Compound Leaves

Evenpinnate: Leaflets are attached along an extension of the petiole called a rachis; there is an even number of leaflets.
Oddpinnate: Leaflets are attached along an extension of the petiole called a rachis; there is a terminal leaflet and therefore an odd number of leaflets.

Oddpinnate (alternate): As above, but the leaflets are attached alternately along the rachis.

Bipinnate (twice pinnate): A compound leaf dissected twice, with leaflets arranged along rachillae that are attached to the rachis.

Tripinnate (thrice pinnate): A compound leaf with leaflets attached to secondary rachillae that are in turn attached to rachillae, which are borne on the rachis.

Tetrafoliate: A compound leaf with four leaflets.

Palmate: Leaflets are attached to the tip of the petiole.
Terminology

Petiole: The stalk of a leaf.

Leaflet: One of the parts of a compound leaf; leaflets do NOT have axillary buds.

1.1.5 Classification

Classification includes a broad range of decision-theoretic approaches to the identification of images (or parts thereof). All classification algorithms are based on the assumption that the image in question depicts one or more features (e.g., geometric parts in the case of a manufacturing classification system, or spectral regions in the case of remote sensing) and that each of these features belongs to one of several distinct and exclusive classes. The classes may be specified a priori by an analyst (as in supervised classification) or clustered automatically (as in unsupervised classification) into sets of prototype classes, where the analyst merely specifies the number of desired categories. (Classification and segmentation have closely related objectives, as the former is another form of component labeling that can result in segmentation of various features in a scene.)

How It Works

Image classification analyzes the numerical properties of various image features and organizes the data into categories. Classification algorithms typically employ two phases of processing: training and testing. In the initial training phase, characteristic properties of typical image features are isolated and, based on these, a unique description of each classification category (i.e., a training class) is created. In the subsequent testing phase, these feature-space partitions are used to classify image features.
The description of training classes is an extremely important component of the classification process. In supervised classification, statistical processes (i.e., based on a priori knowledge of probability distribution functions) or distribution-free processes can be used to extract class descriptors. Unsupervised classification relies on clustering algorithms to automatically segment the training data into prototype classes. In either case, the motivating criteria for constructing training classes are that they be:
• independent: a change in the description of one training class should not change the description of another,
• discriminatory: different image features should have significantly different descriptions, and
• reliable: all image features within a training group should share the common definitive descriptions of that group.
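Unsupervised construction of prototype classes can be sketched with a minimal k-means loop (an illustrative Python example on synthetic 2-D feature vectors; a real system would typically use a library implementation such as MATLAB's kmeans):

```python
import numpy as np

# Minimal k-means sketch: cluster unlabeled feature vectors into k prototype
# classes; the analyst specifies only the number of desired categories.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest prototype (centroid)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of the vectors assigned to it
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated blobs of synthetic 2-D feature vectors
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (10, 2)),
               np.random.default_rng(2).normal(5, 0.3, (10, 2))])
labels, centers = kmeans(X, k=2)
print(labels)   # points in the same blob end up with the same cluster label
```

The analyst supplies only k = 2; the algorithm discovers the two prototype classes from the data itself.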
A convenient way of building a parametric description of this sort is via a feature vector (v1, v2, ..., vn), where n is the number of attributes that describe each image feature and training class.
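A minimal sketch of a classifier built on such feature vectors (illustrative Python; the two object classes and all measurements are invented for the example): training summarizes each class by a representative point, and testing assigns a new feature vector to the nearest class.

```python
import numpy as np

# Hypothetical training data: feature vectors (major-axis length, head diameter)
bolts   = np.array([[20.0, 8.0], [22.0, 9.0], [18.0, 7.5]])
needles = np.array([[35.0, 1.0], [38.0, 1.2], [33.0, 0.9]])

# Training phase: describe each training class by its centroid in feature space
centroids = {"bolt": bolts.mean(axis=0), "needle": needles.mean(axis=0)}

# Testing phase: assign a feature vector to the class with the nearest centroid
def classify(v):
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

print(classify(np.array([21.0, 8.5])))   # -> bolt
print(classify(np.array([36.0, 1.1])))   # -> needle
```

Here each class occupies a region of the 2-D feature space around its centroid, and classification reduces to a distance comparison.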
This representation allows us to consider each image feature as occupying a point, and each training class as occupying a sub-space (i.e., a representative point surrounded by some spread, or deviation), within the n-dimensional classification space. Viewed this way, the classification problem is that of determining the sub-space (class) to which each feature vector belongs. For example, consider an application where we must distinguish two different types of objects (e.g., bolts and sewing needles) based on a set of two attribute classes (e.g., length along the major axis and head diameter). If we assume that we have a vision system capable of extracting these features from a set of training images, we can plot the result in the 2-D feature space shown in the figure below.

Fig 1.4: Classification example feature space (+: sewing needles, o: bolts)

1.2 Statement of the Problem

The main objective of the project is the automatic classification of compound leaves as bipinnate, tripinnate, tetrafoliate, or palmate using a back-propagation neural network.

1.3 Scope of the Study

The project is applicable to a wide range of applications, such as leaf recognition in botany, Ayurveda, forestry, and horticulture, and can also be used by farmers.

1.4 Chapter Summary

• Chapter-1: Preamble. Introduces digital image processing, basic concepts of leaves and their types, the definition and working of the classification process, the statement of the problem, the scope of the project, and the methodology.
• Chapter-2: Requirements and Analysis. Specifies the software and hardware requirements and gives basic information about MATLAB.
• Chapter-3: Literature Review. Presents the findings and observations of the feasibility study conducted before the actual development of the project.
• Chapter-4: System Design. Describes the proposed design and its modules: segmentation, feature extraction, database creation (where the shape-feature values of images are stored), and the neural network classifier.
• Chapter-5: System Implementation. Describes the set of MATLAB functions used in this project.
• Chapter-6: Results and Discussions. Presents the GUI of the project and the output of the application.
• Chapter-7: Conclusions and Future Scope. Concludes the project and suggests future enhancements that could not be covered due to constraints of time and resources.
• Bibliography. Lists the journal and case-study papers referred to throughout the development cycle of the project.

1.5 Methodology

Select a folder containing various classes of leaves for database creation, then:
1. Database creation
2. Segmentation
3. Feature extraction
4. Train the neural network
   a. Back-propagation neural network
5. Select test image
   a. Extract test features
6. Classification
7. Classified result
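The training step (4a) can be sketched as a minimal two-layer network trained by back-propagation (illustrative Python/NumPy with invented shape features; the project itself uses MATLAB's neural network facilities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training features, e.g. (aspect ratio, solidity) for two
# well-separated leaf classes; real features would come from segmentation.
X = np.vstack([rng.normal([1.0, 0.9], 0.05, (20, 2)),    # class 0
               rng.normal([3.0, 0.5], 0.05, (20, 2))])   # class 1
y = np.repeat([[0.0], [1.0]], 20, axis=0)

# A 2-4-1 sigmoid network
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

pred = (out > 0.5).astype(int).ravel()
print(pred[:3], pred[-3:])   # class-0 features predicted 0, class-1 features predicted 1
```

In the testing step, features extracted from a new leaf image would be pushed through the same forward pass to obtain the classified result.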